
AI - What Is Superintelligence AI? Is Artificial Superintelligence Possible?

 


 

In its most common use, the term "superintelligence" refers to any level of intelligence that at least equals, and in most formulations far exceeds, human intellect across a broad range of domains.


Though computer intelligence has long outperformed human cognition in specific tasks (for example, a calculator's ability to perform arithmetic far faster than any person), such narrow feats are not usually considered superintelligence in the strict sense, owing to their limited functional range.


In this sense, superintelligence would necessitate, in addition to artificial mastery of specific theoretical tasks, some kind of additional mastery of what has traditionally been referred to as practical intelligence: a generalized sense of how to subsume particulars into universal categories that are in some way worthwhile.


To this day, no such generalized superintelligence has manifested, and hence all discussions of superintelligence remain speculative to some degree.


Whereas traditional theories of superintelligence have been limited to theoretical metaphysics and theology, recent advancements in computer science and biotechnology have opened up the prospect of superintelligence being materialized.

Although the timing of such a development is hotly debated, a growing body of evidence suggests that material superintelligence is both possible and likely.


If this hypothesis proves correct, superintelligence will almost certainly emerge from advances in one of two major areas of research:


  1. Bioengineering 
  2. Computer science





The former involves efforts not only to map and manipulate the human genome, but also to replicate the human brain electronically through whole brain emulation, also known as mind uploading.


The first of these bioengineering efforts is not new, with eugenics programs dating back to at least the seventeenth century.

Despite the major ethical and legal issues that such efforts inevitably raise, the discovery of the structure of DNA in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.

Much of this research is aimed at gaining a better understanding of the human brain's genetic basis in order to manipulate DNA code in the direction of superhuman intelligence.



Uploading is a somewhat different, but still biologically based, approach to superintelligence that aims to map out neural networks in order to successfully transfer human intelligence onto computer interfaces.


  • In this relatively new area of study, the brains of insects and small animals are micro-dissected and then scanned for detailed computer analysis.
  • The underlying premise of whole brain emulation is that if the brain's structure is better known and mapped, it may become possible to copy it, with or without organic brain tissue.



Despite rapid progress in both genetic mapping and whole brain emulation, both techniques face significant limits, making it unlikely that either of these biological approaches will be the first to attain superintelligence.





Genetic alteration of the human genome, for example, is constrained by the pace of human generations.

Even if it were feasible today to artificially boost cognitive functioning by modifying the DNA of a human embryo (a possibility that is still a long way off), it would take an entire generation for the modified embryo to develop into a fully fledged, superintelligent adult.

It would also presuppose that there are no legal or moral barriers to manipulating human DNA, which is far from the case.

Even the comparatively minor genetic manipulation of human embryos carried out by a Chinese scientist as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).



Whole brain emulation, on the other hand, is still a long way off, owing to biotechnology's limits.


Given current medical technology, the extreme levels of accuracy required at every step of the uploading process are impossible to achieve.

Science and technology currently lack the capacity to dissect and scan human brain tissue with sufficient precision to produce a complete brain emulation.

Furthermore, even if these first steps were feasible, researchers would still face significant challenges in analyzing and digitally replicating the human brain, even with cutting-edge computer technology.




Many analysts believe that such constraints will be overcome, although the timeline for such realizations is unknown.



Apart from biotechnology, the second major path to superintelligence is the field of AI proper, strictly defined as any form of nonorganic (particularly computer-based) intelligence.

Of course, the task of creating a superintelligent AI from the ground up is complicated by a number of factors, many of them purely logistical, such as processing speed, hardware and software design, and funding.

In addition to these practical challenges, there is a significant philosophical problem: human programmers cannot know, and therefore cannot program, that which is superior to their own intelligence.





This worry motivates, in part, much contemporary research on machine learning and interest in the notion of a seed AI.


The latter is defined as any machine capable of modifying its responses to stimuli based on an analysis of how well it performs relative to a predetermined goal.

Importantly, the concept of a seed AI entails not only the capacity to change its replies by extending its base of content knowledge (stored information), but also the ability to change the structure of its programming to better fit a specific job (Bostrom 2017, 29).

Indeed, it is this latter capability that would give a seed AI what Nick Bostrom refers to as "recursive self-improvement," or the ability to evolve iteratively (Bostrom 2017, 29).

This would eliminate the need for programmers to have an a priori vision of superintelligence, since the seed AI would continually enhance its own programming, with each more intelligent iteration writing a superior version of itself, eventually surpassing the human level.
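
To make the mechanism concrete, the toy sketch below shows an agent that improves both its answer to a task and part of its own improvement procedure. It is a cartoon of recursive self-improvement under invented assumptions (the objective function, the mutation rule, and all names are hypothetical), not a blueprint for an actual seed AI.

```python
import random

def objective(x: float) -> float:
    """Predetermined goal: score is higher the closer x is to 3.0."""
    return -(x - 3.0) ** 2

def run_seed_ai(generations: int = 50) -> float:
    x = 0.0      # the agent's current answer (its "content knowledge")
    step = 1.0   # part of the agent's own search procedure (its "code")
    best = objective(x)
    for _ in range(generations):
        # Object-level change: propose a modified answer.
        candidate = x + random.uniform(-step, step)
        score = objective(candidate)
        if score > best:
            x, best = candidate, score
            # Meta-level change: the agent rewrites its own procedure,
            # searching more boldly after a success...
            step *= 1.1
        else:
            # ...and more cautiously after a failure.
            step *= 0.9
    return x

if __name__ == "__main__":
    print(f"Converged answer: {run_seed_ai():.3f}")  # approaches 3.0
```

The point of the toy is the two levels of modification: a genuine seed AI would rewrite its own programming, not merely a single numeric step size.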

Such a machine would cast serious doubt on the conventional philosophical assumption that machines are incapable of self-awareness.

Proponents of this view can be traced back to Descartes, and they include more recent thinkers such as John Haugeland and John Searle.



On this view, machine intelligence is simply the successful correlation of inputs with outputs according to a predefined program.




As a result, machines differ from humans in kind, the latter being uniquely characterized by conscious self-awareness.

Humans are understood to comprehend the activities they perform, whereas machines are thought to carry out their functions mindlessly, that is, without understanding what they are doing.

Were a successful seed AI to be constructed, this core assumption would be directly challenged.

By upgrading its own programming in ways that surprise its human programmers and defy their forecasts, a seed AI would demonstrate a level of self-awareness and autonomy not easily explained by the Cartesian philosophical paradigm.

Indeed, although still speculative for the time being, the increasingly plausible prospect of superintelligent AI raises a host of moral and legal dilemmas that have sparked a great deal of philosophical discussion in this field.

The main worries concern the human species' security in the event of what Bostrom calls an "intelligence explosion," that is, the creation of a seed AI followed by a possibly exponential growth in intelligence (Bostrom 2017).



One of the key problems is the inherently unpredictable character of such an outcome.


Because superintelligence, by definition, entails autonomy, humans will never be able to fully foresee how a superintelligent AI would act.

Even in the few cases of specialized superintelligence that humans have been able to construct and study so far (for example, machines that have surpassed humans at strategic games such as chess and Go), human forecasts of AI behavior have proven very unreliable.

For many critics, such unpredictability is a strong indicator that, should more general forms of superintelligent AI emerge, humans would swiftly lose the capacity to control them (Kissinger 2018).





Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.


Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some new work claims that this perspective reveals a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).

Nonetheless, there are compelling grounds to believe that a superintelligent AI would at the very least regard human goals as incompatible with its own, and might even regard humans as existential threats.

For example, computer scientist Steve Omohundro has argued that even a relatively basic kind of superintelligent AI, such as a chess bot, would have motive to seek the extinction of humanity as a whole, and might be able to build the tools to do it (Omohundro 2014).

Similarly, Bostrom has argued that a superintelligence explosion would most likely result in, if not the extinction of the human race, then at the very least a grim future (Bostrom 2017).

Whatever the merits of such theories, the great uncertainty entailed by superintelligence is obvious.

If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to protect its interests.





This conclusion may prove contentious to hardened determinists, who claim that technological advancement is so tightly bound to inflexible market forces that it is simply impossible to change its pace or direction in any significant way.


According to this determinist viewpoint, if AI can deliver cost-cutting solutions for industry and commerce (as it has already begun to do), its growth will proceed into the realm of superintelligence regardless of any unexpected negative repercussions.

Against such views, many skeptics argue that growing societal awareness of the potential risks of AI, together with thorough political monitoring of its development, is a necessary counterweight.


Bostrom highlights various examples of effective worldwide cooperation in science and technology as crucial precedents that challenge the determinist approach, including CERN, the Human Genome Project, and the International Space Station (Bostrom 2017, 253).

To this, one may add examples from the worldwide environmental movement, which began in the 1960s and 1970s and has imposed significant restrictions on pollution committed in the name of uncontrolled capitalism (Feenberg 2006).



Given the speculative nature of superintelligence research, it is hard to predict what the future holds.

However, if superintelligence does pose an existential danger to human existence, caution dictates that a worldwide collaborative strategy, rather than a free-market approach, be adopted for AI development.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.



References & Further Reading:


  • Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
  • Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
  • Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
  • Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
  • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
  • Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019.



Artificial Intelligence - Who Is Aaron Sloman?

 




Aaron Sloman (1936–) is a renowned artificial intelligence and cognitive science philosopher.

He is a global expert on the evolution of biological information processing, an area of study that seeks to understand how animal species have acquired cognitive capacities that surpass those of present-day technology.

In recent years, he has been debating whether evolution was the first blind mathematician and whether weaver birds are actually capable of recursion (dividing a problem into parts in order to solve it).

His present Meta-Morphogenesis Project builds on an idea of Alan Turing (1912–1954), who suggested that although computers could display mathematical ingenuity, only brains could exercise mathematical intuition.

Because of this, Sloman argues, not every aspect of the cosmos, including the human brain, can be represented in a sufficiently massive digital computer.

This assertion directly contradicts digital physics, which claims that the universe can be characterized as a simulation running on a sufficiently large and fast general-purpose computer that calculates the development of the cosmos.

Sloman proposes that, just as scientists have evolved, accumulated, and applied increasingly complex mathematical knowledge, the universe has developed its own biological construction kits for creating and deriving other (different and more sophisticated) construction kits.

He refers to this concept as the Self-Informing Universe, and suggests that scientists build a multi-membrane Super-Turing machine that runs on subneural biological chemistry.

Sloman was born to Jewish Lithuanian immigrants in Southern Rhodesia (now Zimbabwe).

At the University of Cape Town, he earned a bachelor's degree in mathematics and physics.

He was awarded a Rhodes Scholarship and earned his PhD in philosophy from Oxford University, where he defended Immanuel Kant's mathematical concepts.

As a visiting scholar at Edinburgh University in the early 1970s, he came to see artificial intelligence as a promising way forward for the philosophical understanding of the mind.

He argued that, using Kant's ideas as a starting point, a workable robotic toy baby could be created that would eventually grow in intellect and become a mathematician on a par with Archimedes or Zeno.

He was one of the first scholars to dispute John McCarthy's claim that a computer program capable of operating intelligently in the real world must use structured, logic-based concepts.

Sloman was one of the founding members of the University of Sussex School of Cognitive and Computer Sciences.

There, he collaborated with Margaret Boden and Max Clowes to advance artificial intelligence instruction and research.

This effort resulted in the commercialization of the widely used Poplog AI teaching system.

Sloman's The Computer Revolution in Philosophy (1978) is famous for being one of the first to recognize that metaphors from the realm of computers (for example, the brain as a data storage device and thinking as a collection of tools) will dramatically alter how we think about ourselves.

The epilogue of the book contains observations on the near impossibility of AI sparking the Singularity and the likelihood of a human Society for the Liberation of Robots to address possible future brutal treatment of intelligent machines.

Sloman held the Artificial Intelligence and Cognitive Science chair in the School of Computer Science at the University of Birmingham until his formal retirement in 2002.

He is a member of the Alan Turing Institute and the Association for the Advancement of Artificial Intelligence.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Superintelligence; Turing, Alan.


References & Further Reading:


Sloman, Aaron. 1962. “Knowing and Understanding: Relations Between Meaning and Truth, Meaning and Necessary Truth, Meaning and Synthetic Necessary Truth.” D. Phil., Oxford University.

Sloman, Aaron. 1971. “Interactions between Philosophy and AI: The Role of Intuition and Non-Logical Reasoning in Intelligence.” Artificial Intelligence 2: 209–25.

Sloman, Aaron. 1978. The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Terrace, Hassocks, Sussex, UK: Harvester Press.

Sloman, Aaron. 1990. “Notes on Consciousness.” AISB Quarterly 72: 8–14.

Sloman, Aaron. 2018. “Can Digital Computers Support Ancient Mathematical Consciousness?” Information 9, no. 5: 111.



Artificial Intelligence - Who Is Steve Omohundro?

 




In the field of artificial intelligence, Steve Omohundro (1959–) is a well-known scientist, author, and entrepreneur.

He is the founder of Self-Aware Systems, the chief scientist of AIBrain, and an adviser to the Machine Intelligence Research Institute (MIRI).

Omohundro is well-known for his insightful, speculative studies on the societal ramifications of AI and the safety of smarter-than-human computers.

Omohundro believes that a fully predictive artificial intelligence science is required.

He thinks that if goal-driven artificial general intelligences are not carefully designed in the future, they are likely to take harmful actions, cause conflicts, or even bring about the extinction of humanity.

Indeed, Omohundro argues that AIs with inadequate programming might act psychopathically.

He claims that programmers often create flaky software and programs that "manipulate bits" without knowing why.

Omohundro wants AGIs to be able to monitor and comprehend their own operations, spot flaws, and rewrite themselves to improve performance.

This is what genuine machine learning looks like.

The risk is that AIs may evolve into something humans cannot comprehend, make inscrutable judgments, or produce unexpected repercussions.

As a result, Omohundro contends, artificial intelligence must evolve into a discipline that is more predictive and anticipatory.

In "The Nature of Self-Improving Artificial Intelligence," one of his widely available online papers, Omohundro also suggests that a future self-aware system with access to the internet would be influenced by the scientific papers it reads, which, recursively, justifies writing the paper in the first place.

AGI agents must be programmed with value sets that drive them to pick objectives that benefit mankind as they evolve.

Self-improving systems like the ones Omohundro is working on don't exist yet.

Inventive minds, according to Omohundro, have so far produced only inert systems (chairs and coffee mugs), reactive systems (mousetraps and thermostats), adaptive systems (advanced speech recognition systems and intelligent virtual assistants), and deliberative systems (the Deep Blue chess-playing computer).

Self-improving systems, as described by Omohundro, would have to actively think and make judgments in the face of uncertainty regarding the effects of self-modification.

The essential nature of self-improving AIs, according to Omohundro, may be understood by modeling them as rational agents, a notion he draws from microeconomic theory.

Because humans are only imperfectly rational, the discipline of behavioral economics has exploded in popularity in recent decades.

AI agents, on the other hand, owing to their self-improving cognitive architectures, must eventually establish coherent objectives and preferences ("utility functions") that sharpen their models of their surroundings.

These beliefs will then assist them in forming new aims and preferences.

Omohundro draws on the contributions of mathematician John von Neumann and economist Oskar Morgenstern to expected utility theory.

Completeness, transitivity, continuity, and independence are the axioms of rational behavior proposed by von Neumann and Morgenstern.
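
A minimal worked example of this framework may help; the lotteries and utility numbers below are invented for illustration. A von Neumann-Morgenstern rational agent simply chooses the option whose probability-weighted utility is highest.

```python
from typing import List, Tuple

Lottery = List[Tuple[float, float]]  # (probability, utility) pairs

def expected_utility(lottery: Lottery) -> float:
    """Probability-weighted sum of utilities for one option."""
    return sum(p * u for p, u in lottery)

# Hypothetical choice between a certain payoff and a gamble.
safe: Lottery = [(1.0, 50.0)]
risky: Lottery = [(0.6, 100.0), (0.4, -20.0)]

options = {"safe": safe, "risky": risky}
for name, lottery in options.items():
    print(f"{name}: EU = {expected_utility(lottery):.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print(f"A rational agent picks: {best}")  # risky: 0.6*100 + 0.4*(-20) = 52
```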

For artificial intelligences, Omohundro proposes four "fundamental drives": efficiency, self-preservation, resource acquisition, and creativity.

These motivations are expressed as "behaviors" by future AGIs with self-improving, rational agency.

Both physical and computational operations are included in the efficiency drive.

Artificial intelligences will strive to make effective use of limited resources such as space, mass, energy, processing time, and computer power.

The self-preservation drive will lead powerful artificial intelligences to avoid losing resources to other agents and to protect their capacity for goal fulfillment.

A passively behaving artificial intelligence is unlikely to survive.

The resource acquisition drive involves locating new sources of resources, trading for them, cooperating with other agents, or even stealing what is required to reach the end objective.

The creative drive encompasses all of the innovative ways in which an AGI may boost anticipated utility in order to achieve its many objectives.

This motivation might include the development of innovative methods for obtaining and exploiting resources.

Signaling, according to Omohundro, is a distinctively human source of creative energy, variation, and divergence.

Humans utilize signaling to express their intentions regarding other helpful tasks they are doing.

If A is more likely to be true when B is true than when B is false, then A signals B.

Employers, for example, are more likely to hire a prospective worker who is enrolled in a class that appears to confer skills the company desires, even if it does not actually do so.

The fact that the potential employee is enrolled in class indicates to the company that he or she is more likely to learn useful skills than the candidate who is not.

Similarly, a billionaire does not need to gift another billionaire a billion dollars to indicate that they are among the super-wealthy.

A huge bag containing several million dollars could suffice.
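
The definition quoted above can be checked numerically. In the sketch below, all probabilities are invented for illustration: A stands for "the candidate is enrolled in the class" and B for "the candidate will learn useful skills." Because P(A|B) > P(A|not B), observing A raises the probability of B, and Bayes' rule quantifies by how much.

```python
p_B = 0.3              # prior: fraction of candidates who will learn the skills
p_A_given_B = 0.8      # enrollment rate among those who will
p_A_given_not_B = 0.2  # enrollment rate among those who won't

# Total probability of observing the signal A.
p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)

# Bayes' rule: how much does seeing A shift belief in B?
p_B_given_A = p_A_given_B * p_B / p_A

print(f"P(B) = {p_B:.2f}, P(B|A) = {p_B_given_A:.2f}")  # 0.30 -> 0.63
```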

Omohundro's notion of fundamental AI drives was incorporated into Oxford philosopher Nick Bostrom's instrumental convergence thesis, which claims that a few instrumental values are pursued in order to accomplish an ultimate objective, often referred to as a terminal value.

Self-preservation, goal content integrity (retention of preferences over time), cognitive improvement, technical perfection, and resource acquisition are among Bostrom's instrumental values (he prefers not to call them drives).

Future AIs might have a reward function or a terminal value of optimizing some utility function.

Omohundro wants designers to construct artificial general intelligence with kindness toward people as its ultimate objective.

He believes, however, that military conflicts and economic pressures make the development of destructive artificial general intelligence more plausible.

Drones are increasingly being used by military forces to deliver explosives and conduct surveillance.

He also claims that future battles will almost certainly be informational in nature.

In a future where cyberwar is a possibility, a cyberwar infrastructure will be required.

Energy encryption, a unique wireless power transmission method that scrambles energy so that it stays safe and cannot be exploited by rogue devices, is one way to counter the issue.

Another area where information conflict is producing instability is the employment of artificial intelligence in fragile financial markets.

Digital cryptocurrencies and crowdsourcing marketplace systems like Mechanical Turk are ushering in a new era of autonomous capitalism, according to Omohundro, and we are unable to deal with the repercussions.

As president of the company Possibility Research, advocate of a new cryptocurrency called Pebble, and advisory board member of the Institute for Blockchain Studies, Omohundro has spoken of the need for complete digital provenance in economic and cultural recordkeeping, to prevent AI-enabled deception, fakery, and fraud from overtaking human society.

In order to build a verifiable "blockchain civilization based on truth," he suggests that digital provenance methods and sophisticated cryptography techniques monitor autonomous technology and better check the history and structure of any alterations being performed.

Possibility Research focuses on smart technologies that enhance computer programming, decision-making systems, simulations, contracts, robotics, and governance.

In recent years, Omohundro has advocated the creation of so-called Safe-AI scaffolding strategies to counter these dangers.

The objective is to create self-contained systems that already have temporary scaffolding or staging in place.

The scaffolding assists programmers who are assisting in the development of a new artificial general intelligence.

The virtual scaffolding may be removed after the AI has been completed and evaluated for stability.

The initial generation of restricted safe systems created in this manner might be used to develop and test less constrained AI agents in the future.

Utility functions aligned with agreed-upon human philosophical imperatives, human values, and democratic principles would be included in advanced scaffolding.

Self-improving AIs may eventually have inscribed the Universal Declaration of Human Rights or a Universal Constitution into their fundamental fabric, guiding their growth, development, choices, and contributions to mankind.
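
The toy sketch below illustrates the scaffolding idea in miniature: proposed self-modifications pass through fixed constraint checks before they are applied, and the checks (the scaffold) can be removed once the system has been judged stable. Everything here, from the class to the string-based "program," is hypothetical, a cartoon of the staging Omohundro describes rather than his design.

```python
from typing import Callable, List

class ScaffoldedAgent:
    """An agent whose self-modifications are gated by a temporary scaffold."""

    def __init__(self, constraints: List[Callable[[str], bool]]):
        self.constraints = constraints  # the scaffold: fixed safety checks
        self.program: List[str] = []    # the agent's evolving "code"

    def propose_modification(self, change: str) -> bool:
        """Apply a self-modification only if every constraint allows it."""
        if all(check(change) for check in self.constraints):
            self.program.append(change)
            return True
        return False

# Example scaffold: forbid changes that touch the reward machinery.
def no_reward_tampering(change: str) -> bool:
    return "reward" not in change

agent = ScaffoldedAgent(constraints=[no_reward_tampering])
print(agent.propose_modification("improve planner heuristics"))  # True
print(agent.propose_modification("rewrite reward function"))     # False
```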

Omohundro graduated from Stanford University with degrees in mathematics and physics, as well as a PhD in physics from the University of California, Berkeley.

In 1985, he co-created StarLisp, a high-level programming language for the Thinking Machines Corporation's Connection Machine, a massively parallel supercomputer then under construction.

He wrote Geometric Perturbation Theory in Physics (1986), a book on differential and symplectic geometry.

He was an associate professor of computer science at the University of Illinois in Urbana-Champaign from 1986 to 1988.

He cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard.

He also oversaw the university's Vision and Learning Group.

He created the 3D graphics system for Mathematica, the symbolic mathematical computation application.

In 1990, he led an international team at the University of California, Berkeley's International Computer Science Institute (ICSI) to develop Sather, an object-oriented, functional programming language.

Automated lip-reading, machine vision, machine learning algorithms, and other digital technologies have all benefited from his work.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


General and Narrow AI; Superintelligence.



References & Further Reading:



Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2: 71–85.

Omohundro, Stephen M. 2008a. “The Basic AI Drives.” In Proceedings of the 2008 Conference on Artificial General Intelligence, 483–92. Amsterdam: IOS Press.

Omohundro, Stephen M. 2008b. “The Nature of Self-Improving Artificial Intelligence.” https://pdfs.semanticscholar.org/4618/cbdfd7dada7f61b706e4397d4e5952b5c9a0.pdf.

Omohundro, Stephen M. 2012. “The Future of Computing: Meaning and Values.” https://selfawaresystems.com/2012/01/29/the-future-of-computing-meaning-and-values.

Omohundro, Stephen M. 2013. “Rational Artificial Intelligence for the Greater Good.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart, 161–79. Berlin: Springer.

Omohundro, Stephen M. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3: 303–15.

Shulman, Carl. 2010. Omohundro’s ‘Basic AI Drives’ and Catastrophic Risks. Berkeley, CA: Machine Intelligence Research Institute.




Artificial Intelligence - Who Is Elon Musk?

 




Elon Musk (1971–) is an engineer, entrepreneur, and inventor who was born in South Africa.

He holds South African, Canadian, and US citizenship and resides in California.

Despite his controversial personality, Musk is widely regarded as one of the most prominent inventors and engineers of the twenty-first century and an important influencer and contributor to the development of artificial intelligence.

Musk's business instincts and remarkable technological talent were evident from an early age.

By the age of ten, he had taught himself to program computers, and by the age of twelve he had produced a video game and sold the source code to a computer magazine.

Since his youth, Musk has woven allusions to some of his favorite novels into SpaceX's Falcon Heavy rocket launch and Tesla's software.

Musk's official schooling was centered on economics and physics rather than engineering, interests that are mirrored in his subsequent work, such as his efforts in renewable energy and space exploration.

He began his education at Queen's University in Canada, but later transferred to the University of Pennsylvania, where he earned bachelor's degrees in Economics and Physics.

Musk stayed at Stanford University for just two days, where he had enrolled to pursue a PhD in energy physics, before departing to start his first firm, Zip2, with his brother Kimbal Musk.


Musk has started or cofounded many firms, including three different billion-dollar enterprises: SpaceX, Tesla, and PayPal, all driven by his diverse interests and goals.


• Zip2: a web software company that was eventually purchased by Compaq.

• X.com: an online bank that later became the online payments company PayPal through a merger.

• Tesla, Inc.: an electric car and solar panel manufacturer (the latter via its subsidiary SolarCity).

• SpaceX: a commercial spacecraft manufacturer and space transportation services provider.

• Neuralink: a neurotechnology startup focused on brain-computer interfaces.

• The Boring Company: an infrastructure and tunnel construction corporation.

• OpenAI: a nonprofit AI research company focused on the promotion and development of friendly AI.

Musk is a supporter of environmentally friendly energy and consumption.


Concerns over the planet's future habitability prompted him to investigate the potential of establishing a self-sustaining human colony on Mars.

Other projects include the Hyperloop, a high-speed transportation system, and the Musk electric jet, a jet-powered supersonic electric aircraft.

Musk sat on President Donald Trump's Strategy and Policy Forum and Manufacturing Jobs Initiative for a short time before stepping down when the United States withdrew from the Paris Climate Agreement.

Musk launched the Musk Foundation in 2002, which funds and supports research and activism in the domains of renewable energy, human space exploration, pediatric research, and science and engineering education.

Though Musk is best known for his work with Tesla and SpaceX, as well as for his contentious social media pronouncements, his influence on AI is significant.

In 2015, Musk cofounded the charity OpenAI with the objective of creating and supporting "friendly AI," or AI that is created, deployed, and utilized in a manner that benefits mankind as a whole.

OpenAI's objective is to make AI open and accessible to the general public, reducing the risks of AI being controlled by a few privileged people.

OpenAI is especially concerned about the possibility of Artificial General Intelligence (AGI), which is broadly defined as AI capable of human-level (or greater) performance on any intellectual task, and ensuring that any such AGI is developed responsibly, transparently, and distributed evenly and openly.

OpenAI has had its own successes in taking AI to new levels while staying true to its goals of keeping AI friendly and open.

In June 2018, a team of OpenAI-built bots defeated a team of human players at the video game Dota 2, a feat that could only be accomplished through teamwork and collaboration among the bots.

Bill Gates, a cofounder of Microsoft, praised the achievement on Twitter, calling it "a huge milestone in advancing artificial intelligence" (@BillGates, June 26, 2018).

Musk resigned from the OpenAI board in February 2018 to avoid any conflicts of interest as Tesla advanced its own AI work on autonomous driving.

Musk became the CEO of Tesla in 2008 after cofounding the company in 2003 as an investor.

Musk was the chairman of Tesla's board of directors until 2018, when he stepped down as part of a deal with the US Securities and Exchange Commission over Musk's false claims about taking the company private.

Tesla produces electric automobiles with self-driving capabilities.

Tesla Grohmann Automation and Solar City, two of its subsidiaries, offer relevant automotive technology and manufacturing services and solar energy services, respectively.

Musk claimed that Tesla would reach Level 5 autonomous driving capability, as defined by the National Highway Traffic Safety Administration's (NHTSA) five levels of autonomous driving, in 2019.

Tesla's aggressive development of autonomous driving has influenced conventional carmakers' attitudes toward electric cars and autonomous driving, and has prompted a congressional assessment of how and when the technology should be regulated.

Musk is widely credited as a key influencer in moving the automotive industry toward autonomous driving, highlighting the benefits of autonomous vehicles (including reduced fatalities in vehicle crashes, increased worker productivity, increased transportation efficiency, and job creation) and demonstrating that the technology is achievable in the near term.

Tesla's autonomous driving software, Autopilot, has been created and enhanced under the guidance of Musk and Tesla's director of AI, Andrej Karpathy.

The computer vision analysis used by Tesla, which includes an array of cameras on each car and real-time image processing, enables the system to make real-time observations and predictions.

The cameras, as well as other exterior and internal sensors, capture a large quantity of data, which is evaluated and utilized to improve Autopilot programming.
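
For a sense of the general architecture this describes, with many cameras feeding a real-time perception step whose outputs drive observation and prediction, the sketch below shows a generic multi-camera loop. It is emphatically not Tesla's implementation; the camera count, the stand-in detector, and all names are invented.

```python
from typing import Dict, List

import numpy as np

NUM_CAMERAS = 8  # hypothetical camera array

def capture_frames() -> List[np.ndarray]:
    """Stand-in for grabbing one synchronized frame from each camera."""
    return [np.random.rand(64, 64, 3) for _ in range(NUM_CAMERAS)]

def detect_objects(frame: np.ndarray) -> List[Dict]:
    """Stand-in for a learned detector; returns one toy detection."""
    return [{"kind": "vehicle", "distance_m": float(frame.mean() * 100)}]

def perception_step() -> List[Dict]:
    """Fuse per-camera detections into one world view for the planner."""
    observations: List[Dict] = []
    for frame in capture_frames():
        observations.extend(detect_objects(frame))
    return observations

if __name__ == "__main__":
    world = perception_step()
    nearest = min(o["distance_m"] for o in world)
    print(f"{len(world)} detections this cycle; nearest at {nearest:.1f} m")
```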

Tesla is notable among autonomous vehicle makers for its opposition to the LIDAR laser sensor (an acronym for light detection and ranging).

Tesla uses cameras, radar, and ultrasonic sensors instead.

Though academics and manufacturers disagree about whether LIDAR is required for fully autonomous driving, its high cost has limited Tesla's rivals' ability to produce and sell vehicles at a price point that puts large numbers of data-gathering cars on the road.

Tesla is creating its own AI hardware in addition to its AI programming.

Musk stated in late 2017 that Tesla is building its own silicon for artificial-intelligence calculations, allowing the company to construct its own AI processors rather than depending on third-party sources like Nvidia.

Tesla's AI progress in autonomous driving has been marred by setbacks.

Tesla has consistently missed self-imposed deadlines, and serious accidents have been blamed on flaws in the vehicle's Autopilot mode, including a non-injury accident in 2018, in which a vehicle failed to detect a parked firetruck on a California freeway, and a fatal accident the same year, in which a vehicle failed to detect a pedestrian outside a crosswalk.

Neuralink was established by Musk in 2016.

With the stated objective of helping humans to keep up with AI breakthroughs, Neuralink is focused on creating devices that can be implanted into the human brain to better facilitate communication between the brain and software.

Musk has characterized the devices as a more efficient interface with computing equipment: whereas people now operate devices with their fingertips and voice commands, directives would instead come straight from the brain.

Though Musk has made major contributions to AI, his pronouncements regarding the risks associated with it have been apocalyptic.

Musk has called AI "humanity's greatest existential danger" (McFarland 2014) and "the greatest risk we face as a civilization" (Morris 2017).

He cautions against the perils of power concentration, a lack of independent control, and a competitive rush to acceptance without appropriate analysis of the repercussions.

While Musk has used colorful language such as "summoning the demon" (McFarland 2014) and depictions of cyborg overlords, he has also warned of more immediate and realistic concerns such as job losses and AI-driven misinformation campaigns.

Though Musk's statements may come across as alarmist, many important and well-respected figures, including Microsoft cofounder Bill Gates, Swedish-American physicist Max Tegmark, and the late theoretical physicist Stephen Hawking, share his concern.

Furthermore, Musk does not call for the cessation of AI research.

Instead, Musk advocates responsible AI development and regulation, including the formation of a congressional committee that would spend years studying AI, with the goal of better understanding the technology and its hazards before establishing suitable legal limits.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Bostrom, Nick; Superintelligence.


References & Further Reading:


Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates/status/1011752221376036864.

Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.

McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/.

Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, November 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai.

Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November 15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the-architect-of-tomorrow-120850/.


