
What Is Artificial General Intelligence?



Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities, enabling an AGI system to solve problems when presented with unfamiliar tasks. 

In other words, it's AI's capacity to learn similarly to humans.



It is also known as strong AI, full AI, or general intelligent action. 

The phrase "strong AI," however, is only used in few academic publications to refer to computer systems that are sentient or aware. 

These definitions may change since specialists from many disciplines see human intelligence from various angles. 

For instance, computer scientists often characterize human intelligence as the capacity to accomplish objectives. 

On the other hand, general intelligence is defined by psychologists in terms of survival or adaptation.

Weak or narrow AI, in contrast to strong AI, consists of programs created to address a single problem; it lacks awareness because it is not meant to have broad cognitive capacities. 

Autonomous cars and IBM's Watson supercomputer are two examples. 

In computer science, however, AGI is defined as an intelligent system with full or comprehensive knowledge as well as cognitive computing capabilities.



As of right now, there are no real AGI systems; they are still the stuff of science fiction. 

The long-term objective of these systems is to perform as well as humans do. 

However, because an AGI could acquire and analyze massive amounts of data far faster than the human mind, it might ultimately become more intelligent than humans.



Artificial intelligence (AI) is now capable of carrying out a wide range of functions, including providing tailored suggestions based on prior web searches. 

Additionally, it can recognize objects for autonomous cars to avoid, detect malignant cells during medical examinations, and serve as the brain of home automation systems. 

It can also be used to find potentially habitable planets, act as an intelligent assistant, manage security, and more.



Naturally, AGI appears to go far beyond such capacities, and some scientists are concerned that this may lead to a dystopian future.

Elon Musk said that sentient AI would be more hazardous than nuclear war, while Stephen Hawking advised against its creation because it would see humanity as a possible threat and act accordingly.


Despite concerns, most scientists agree that genuine AGI is decades or perhaps centuries away from being developed and must first meet a number of requirements (which are always changing) in order to be achieved. 

These include the capacity to reason, use strategy, solve puzzles, and make decisions in the face of uncertainty. 



Additionally, it must be able to plan, learn, and communicate in natural language, as well as represent knowledge, including common-sense knowledge. 

AGI must also be able to perceive (hear, see, etc.) and to act, for example by moving objects and moving from place to place to explore. 



How far along are we in the process of developing artificial general intelligence, and who is involved?

According to a 2020 survey by the Global Catastrophic Risk Institute (GCRI), academic institutions, businesses, and various governmental agencies are currently working on 72 recognized AGI R&D projects. 



According to the survey, today's projects are generally smaller, more geographically diverse, less open-source, more focused on humanitarian aims than on academic ones, and more concentrated in private firms than the projects surveyed in 2017. 

The comparison also reveals a decline in projects with academic affiliations, an increase in projects sponsored by corporations, a rise in projects with a humanitarian emphasis, a decline in programs with ties to the military, and a decline in US-based initiatives.


In AGI R&D, particularly military initiatives that are solely focused on fundamental research, governments and organizations have very little roles to play. 

Recent projects, however, are more varied and tend to fall into three broad categories: corporate projects that engage with AGI safety and pursue humanitarian end goals; small private companies with a wide variety of objectives; and academic programs that concern themselves not with AGI safety but with the advancement of knowledge.

One of the most well-known organizations working on AGI is Carnegie Mellon University, which has a project called ACT-R that aims to create a generic cognitive architecture based on the basic cognitive and perceptual functions that support the human mind. 

The project can be thought of as a way of describing how the brain is organized so that individual processing modules can give rise to cognition.
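
To make the production-system idea concrete, the sketch below runs a minimal match-select-apply cycle over a working memory of simple facts, loosely in the spirit of architectures like ACT-R. It is only an illustration: the rule names, memory contents, and control strategy are invented for this example and are not taken from ACT-R itself.

    # Minimal sketch of a production-system cognitive cycle, loosely in the
    # spirit of architectures such as ACT-R. All rules and memory contents
    # here are invented for illustration; this is not ACT-R code.

    # Working memory: a set of simple string "chunks" the system currently knows.
    working_memory = {"goal: make-tea", "kettle: empty"}

    # Production rules: (name, condition chunks, chunks added when the rule fires).
    productions = [
        ("fill-kettle", {"goal: make-tea", "kettle: empty"}, {"kettle: full"}),
        ("boil-water",  {"goal: make-tea", "kettle: full"},  {"water: boiled"}),
        ("steep-tea",   {"goal: make-tea", "water: boiled"}, {"tea: ready"}),
    ]

    def cognitive_cycle(memory, rules, max_steps=10):
        """Repeatedly match rules against memory, fire one, and update memory."""
        for _ in range(max_steps):
            fired = False
            for name, conditions, additions in rules:
                # A rule matches when all of its condition chunks are in memory
                # and it would still add something new (so it cannot loop forever).
                if conditions <= memory and not additions <= memory:
                    memory |= additions
                    print(f"fired {name:12s} -> memory now {sorted(memory)}")
                    fired = True
                    break
            if not fired:  # no rule can fire: the cycle settles
                break
        return memory

    cognitive_cycle(working_memory, productions)

Each pass through the loop plays the role of one cognitive cycle: a rule whose conditions match the current memory fires, deposits new chunks, and thereby enables other rules, which is roughly how separate processing modules are meant to add up to cognition in such architectures.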


Another pioneering organization testing the limits of AGI is Microsoft Research AI, which has carried out a number of research initiatives, including developing a data set for countering bias in machine-learning models. 

The company is also investigating ways to advance ethical AI, establish a responsible AI standard, and develop AI strategies and evaluations within a framework that emphasizes human benefit.


The person behind the well-known video game franchises Commander Keen and Doom has launched yet another intriguing endeavor. 

Keen Technologies, John Carmack's most recent business, is an AGI development company that has already raised $20 million in funding from former GitHub CEO Nat Friedman and Cue founder Daniel Gross. 

Carmack is one of the AGI optimists who believes that it would ultimately help mankind and result in the development of an AI mind that acts like a human, which might be used as a universal remote worker.


So what does AGI's future hold? 

The majority of specialists doubt that AGI will ever be developed, and some believe that the desire to build human-level artificial intelligence will eventually fade. 

Others are working to develop it so that everyone will benefit.

Nevertheless, the creation of AGI is still at an early stage, and little progress is anticipated in the coming decades. 

Throughout history, however, scientists have debated whether developing technologies with the potential to change people's lives will benefit society as a whole or endanger it. 

The same question was raised before the invention of the automobile, during the development of AC electricity, and when the atomic bomb was still only a theory.


~ Jai Krishna Ponnappan



AI - What Is Superintelligence AI? Is Artificial Superintelligence Possible?

 


 

In its most common use, the phrase "superintelligence" refers broadly to any degree of intelligence that at least equals, and usually exceeds, human intellect.


Though computer intelligence has long outperformed natural human cognitive capacity at specific tasks (for example, a calculator's ability to execute calculations swiftly), these are not usually considered examples of superintelligence in the strict sense because of their limited functional range.


In this sense, superintelligence would necessitate, in addition to artificial mastery of specific theoretical tasks, some kind of additional mastery of what has traditionally been referred to as practical intelligence: a generalized sense of how to subsume particulars into universal categories that are in some way worthwhile.


To this day, no such generalized superintelligence has manifested, and hence all discussions of superintelligence remain speculative to some degree.


Whereas traditional theories of superintelligence have been limited to theoretical metaphysics and theology, recent advancements in computer science and biotechnology have opened up the prospect of superintelligence being materialized.

Although the timing of such evolution is hotly discussed, a rising body of evidence implies that material superintelligence is both possible and likely.


If this hypothesis proves correct, superintelligence will almost certainly be the result of advances in one of two major areas of AI research:


  1. Bioengineering 
  2. Computer science





The former involves efforts not only to map out and manipulate the human genome, but also to copy the human brain electronically through whole brain emulation, also known as mind uploading.


The first of these bioengineering efforts is not new, with eugenics programs reaching back to the seventeenth century at the very least.

Despite the major ethical and legal issues that always emerge as a result of such efforts, the discovery of DNA in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.

Much of this study is aimed at gaining a better understanding of the human brain's genetic composition in order to manipulate DNA code in the direction of superhuman intelligence.



Uploading is a somewhat different, but still biologically based, approach to superintelligence that aims to map out neural networks in order to successfully transfer human intelligence onto computer interfaces.


  • The brains of insects and tiny animals are micro-dissected and then scanned for thorough computer analysis in this relatively new area of study.
  • The underlying premise of whole brain emulation is that if the brain's structure is better known and mapped, it may become possible to copy it, with or without organic brain tissue.



Despite the rapid growth of both genetic mapping and whole brain emulation, both techniques have significant limits, making it unlikely that either of these biological approaches will be the first to attain superintelligence.





The genetic alteration of the human genome, for example, is limited by generational constraints.

Even if it were now feasible to artificially boost cognitive functioning by modifying the DNA of a human embryo (which is still a long way off), it would take an entire generation for the modified embryo to develop into a fully fledged, superintelligent human being.

It would also presuppose that there are no legal or moral barriers to manipulating the human genome, which is far from the case.

Even the comparatively minor genetic manipulation of human embryos carried out by a Chinese physician as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).



Whole brain emulation, on the other hand, is still a long way off, owing to biotechnology's limits.


Given the current medical technology, the extreme levels of accuracy necessary at every step of the uploading process are impossible to achieve.

Science and technology currently lack the capacity to dissect and scan human brain tissue with sufficient precision to achieve whole brain emulation.

Furthermore, even if such first steps are feasible, researchers would face significant challenges in analyzing and digitally replicating the human brain using cutting-edge computer technology.




Many analysts believe that such constraints will be overcome, although the timeline for such realizations is unknown.



Apart from biotechnology, the area of AI, which is strictly defined as any type of nonorganic (particularly computer-based) intelligence, is the second major path to superintelligence.

Of course, the work of creating a superintelligent AI from the ground up is complicated by a number of factors; some are purely logistical, such as processing speed, hardware and software design, and funding.

In addition to such practical challenges, there is a significant philosophical issue: human programmers are unable to know, and so cannot program, that which is superior to their own intelligence.





Much contemporary research on machine learning, and much of the interest in the notion of a seed AI, is motivated in part by this worry.


The latter is defined as any machine capable of changing its responses to stimuli based on an analysis of how well it performs relative to a predetermined goal.

Importantly, the concept of a seed AI entails not only the capacity to change its responses by extending its base of content knowledge (stored information), but also the ability to change the structure of its own programming to better fit a specific task (Bostrom 2017, 29).

Indeed, it is this latter capability that would give a seed AI what Nick Bostrom refers to as "recursive self-improvement," or the ability to evolve iteratively (Bostrom 2017, 29).

This would eliminate the requirement for programmers to have an a priori vision of superintelligence, since the seed AI would constantly enhance its own programming, with each more intelligent iteration writing a superior version of itself (eventually beyond the human level).
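
To make the distinction concrete, the toy Python sketch below separates ordinary learning (improving an answer against a fixed goal) from a crude stand-in for self-modification (rewriting the procedure that produces the improvements). All names and the mutation mechanism are invented for illustration; this is a metaphor for Bostrom's point, not an implementation of a seed AI.

    import random

    # Toy illustration of the difference between (a) improving answers to a fixed
    # goal and (b) improving the improvement procedure itself. Everything here is
    # invented for the example; it is a metaphor, not a path to seed AI.

    GOAL = 42.0  # the "predetermined objective"

    def performance(x):
        """Higher is better: how close the current answer is to the goal."""
        return -abs(GOAL - x)

    def make_improver(step):
        """Return a simple improvement procedure parameterized by its step size."""
        def improve(x):
            # Move toward the goal by at most `step` per call.
            return x + max(-step, min(step, GOAL - x))
        improve.step = step
        return improve

    def run(generations=5, rounds_per_gen=5):
        x = 0.0
        improver = make_improver(step=1.0)
        for gen in range(generations):
            # (a) Ordinary learning: apply the current procedure to the answer.
            for _ in range(rounds_per_gen):
                x = improver(x)
            # (b) Crude "self-improvement": propose a mutated copy of the
            #     procedure and keep whichever version makes faster progress.
            candidate = make_improver(improver.step * random.choice([0.5, 2.0]))
            if performance(candidate(x)) > performance(improver(x)):
                improver = candidate
            print(f"gen {gen}: x={x:.2f}, step={improver.step:.2f}, "
                  f"score={performance(x):.2f}")
        return x

    random.seed(0)
    run()

The inner loop only stores better answers; the outer loop rewrites part of the machinery that generates them, which is the structural feature Bostrom labels recursive self-improvement.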

Such a machine would certainly call into question the conventional philosophical assumption that machines are incapable of self-awareness.

Proponents of this perspective can be traced all the way back to Descartes, but they also include more recent thinkers such as John Haugeland and John Searle.



Machine intelligence, in this perspective, is defined as the successful correlation of inputs with outputs according to a predefined program.




As a result, robots differ from humans in kind, the latter alone being characterized by conscious self-awareness.

Humans are supposed to comprehend the activities they execute, but robots are thought to carry out functions mindlessly—that is, without knowing how they work.

Should a successful seed AI be constructed, this core idea would be seriously challenged.

The seed AI would demonstrate a level of self-awareness and autonomy not readily explained by the Cartesian philosophical paradigm by upgrading its own programming in ways that surprise and defy the forecasts of its human programmers.

Indeed, although it is still speculative (for the time being), the increasingly plausible prospect of superintelligent AI poses a slew of moral and legal dilemmas that have sparked a great deal of philosophical discussion on the subject.

The main worries are about the security of the human species in the event of what Bostrom refers to as an "intelligence explosion," that is, the creation of a seed AI followed by a possibly exponential growth in intelligence (Bostrom 2017).



One of the key problems is the inherently unpredictable character of such an outcome.


Because superintelligence, by definition, entails autonomy, humans will not be able to fully foresee how a superintelligent AI would act.

Even in the few cases of specialized superintelligence that humans have been able to construct and study so far (for example, machines that have surpassed humans in strategic games such as chess and Go), human forecasts about AI have proven very unreliable.

For many critics, such unpredictability is a strong indication that, should more general forms of superintelligent AI emerge, humans would swiftly lose their capacity to manage them (Kissinger 2018).





Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.


Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some new work claims that this perspective reveals a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).

Nonetheless, there are compelling grounds to believe that a superintelligent AI would at the very least regard human goals as incompatible with its own, and might even regard humans as existential threats.

For example, computer scientist Steve Omohundro has argued that even a relatively basic kind of superintelligent AI, such as a chess bot, would have a motive to seek the extinction of humanity as a whole, and might be able to build the tools to do so (Omohundro 2014).

Similarly, Bostrom has claimed that a superintelligence explosion would most certainly result in, if not the extinction of the human race, then at the very least a gloomy future (Bostrom 2017).

Whatever the merits of such theories, the great uncertainty entailed by superintelligence is obvious.

If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to protect its interests.





This claim may be contested by hardened determinists, who argue that technological advancement is so tightly bound to inflexible market forces that it is simply impossible to alter its pace or direction in any significant way.


According to this determinist viewpoint, if AI can deliver cost-cutting solutions for industry and commerce (as it has already started to do), its growth will proceed into the realm of superintelligence, regardless of any unexpected negative repercussions.

Many skeptics argue that growing societal awareness of the potential risks of AI, as well as thorough political monitoring of its development, are necessary counterpoints to such viewpoints.


Bostrom highlights various examples of effective worldwide cooperation in science and technology as crucial precedents that challenge the determinist approach, including CERN, the Human Genome Project, and the International Space Station (Bostrom 2017, 253).

To this, one may add examples from the worldwide environmental movement, which began in the 1960s and 1970s and has imposed significant restrictions on pollution committed in the name of uncontrolled capitalism (Feenberg 2006).



Given the speculative nature of superintelligence research, it is hard to predict what the future holds.

However, if superintelligence poses an existential danger to human existence, caution would dictate that a worldwide collaborative strategy rather than a free market approach to AI be used.



~ Jai Krishna Ponnappan




See also: 


Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.



References & Further Reading:


  • Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
  • Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
  • Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
  • Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
  • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
  • Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019.



Artificial Intelligence - History And Timeline

     




    1942

    Science fiction author Isaac Asimov's Three Laws of Robotics appear in the short story "Runaround."


    1943


    Mathematician Emil Post discusses "production systems," a notion later adapted for the 1957 General Problem Solver.


    1943


    "A Logical Calculus of the Ideas of Immanent in Nervous Activity," a study by Warren McCulloch and Walter Pitts on a computational theory of neural networks, is published.


    1944


    The Teleological Society was founded by John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and Howard Aiken to explore, among other things, nervous system communication and control.


    1945


    In his book How to Solve It, George Polya emphasizes the importance of heuristic thinking in problem solving.


    1946


    In New York City, the first of eleven Macy Conferences on Cybernetics gets underway. "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems" is the focus of the inaugural conference.



    1948


    Norbert Wiener, a mathematician, publishes Cybernetics, or Control and Communication in the Animal and the Machine.


    1949


    In his book The Organization of Behavior, psychologist Donald Hebb proposes a theory of brain adaptation during human learning: "neurons that fire together wire together."


    1949


    Edmund Berkeley's book Giant Brains, or Machines That Think, is published.


    1950


    Alan Turing's "Computing Machinery and Intelligence" describes the Turing Test, which attributes intelligence to any computer capable of demonstrating intelligent behavior comparable to that of a person.


    1950


    Claude Shannon releases "Programming a Computer for Playing Chess," a groundbreaking technical study that shares search methods and strategies.



    1951


    Marvin Minsky, a math student, and Dean Edmonds, a physics student, create an electronic rat that can learn to navigate a labyrinth using Hebbian theory.


    1951


    John von Neumann, a mathematician, publishes "General and Logical Theory of Automata," which compares the human brain and central nervous system to a computing machine.


    1951


    For the University of Manchester's Ferranti Mark 1 computer, Christopher Strachey writes a checkers program and Dietrich Prinz writes a chess program.


    1952


    British cyberneticist W. Ross Ashby publishes Design for a Brain: The Origin of Adaptive Behavior, a book on the logical underpinnings of human brain function.


    1952


    At Cornell University Medical College, physiologist James Hardy and physician Martin Lipkin begin developing a McBee punched card system for mechanical diagnosis of patients.


    1954


    Science-Fiction Thinking Machines: Robots, Androids, Computers, a theme-based anthology edited by Groff Conklin, is published.


    1954


    The Georgetown-IBM experiment demonstrates the potential of machine translation of text.


    1955


    Under the direction of economist Herbert Simon and graduate student Allen Newell, artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon University).


    1955


    Mathematician John Kemeny writes "Man as a Machine" for Scientific American.


    1955


    In a Rockefeller Foundation proposal for a Dartmouth College meeting, mathematician John McCarthy coins the phrase "artificial intelligence."



    1956


    Allen Newell, Herbert Simon, and Cliff Shaw create Logic Theorist, an artificial intelligence program for proving theorems in Alfred North Whitehead and Bertrand Russell's Principia Mathematica.


    1956


    The "Constitutional Convention of AI," a Dartmouth Summer Research Project, brings together specialists in cybernetics, automata, information theory, operations research, and game theory.


    1956


    On television, electrical engineer Arthur Samuel shows off his checkers-playing AI software.


    1957


    Allen Newell and Herbert Simon create the General Problem Solver AI program.


    1957


    The Rockefeller Medical Electronics Center shows how an RCA Bizmac computer application might help doctors distinguish between blood disorders.


    1958


    The Computer and the Brain, an unfinished work by John von Neumann, is published.


    1958


    At the "Mechanisation of Thought Processes" symposium at the UK's Teddington National Physical Laboratory, Firmin Nash delivers the Group Symbol Associator its first public demonstration.


    1958


    Frank Rosenblatt develops the single-layer perceptron, a neural network with a supervised learning algorithm for linear data classification.


    1958


    The high-level programming language LISP is specified by John McCarthy of the Massachusetts Institute of Technology (MIT) for AI research.


    1959


    "The Reasoning Foundations of Medical Diagnosis," written by physicist Robert Ledley and radiologist Lee Lusted, presents Bayesian inference and symbolic logic to medical difficulties.


    1959


    At MIT, John McCarthy and Marvin Minsky create the Artificial Intelligence Laboratory.


    1960


    James L. Adams, an engineering student, builds the Stanford Cart, a remote-controlled vehicle with a television camera.


    1962


    In his short story "Without a Thought," science fiction and fantasy author Fred Saberhagen introduces sentient killing robots known as Berserkers.


    1963


    John McCarthy founds the Stanford Artificial Intelligence Laboratory (SAIL).


    1963


    Under Project MAC, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense begins funding artificial intelligence projects at MIT.


    1964


    Joseph Weizenbaum of MIT creates ELIZA, the first program allowing natural language conversation with a computer (a "chatbot").


    1965


    British statistician I. J. Good publishes "Speculations Concerning the First Ultraintelligent Machine," which predicts an impending intelligence explosion.


    1965


    Hubert L. Dreyfus and Stuart E. Dreyfus, philosophers and mathematicians, publish "Alchemy and AI," a study critical of artificial intelligence.


    1965


    Joshua Lederberg and Edward Feigenbaum found the Stanford Heuristic Programming Project, which aims to model scientific reasoning and create expert systems.


    1965


    Donald Michie is the head of Edinburgh University's Department of Machine Intelligence and Perception.


    1965


    Georg Nees organizes the first generative art exhibition, Computer Graphic, in Stuttgart, West Germany.


    1965


    With the expert system DENDRAL, computer scientist Edward Feigenbaum starts a ten-year endeavor to automate the chemical analysis of organic molecules.


    1966


    The Automatic Language Processing Advisory Committee (ALPAC) issues a cautious assessment on machine translation's present status.


    1967


    On a DEC PDP-6 at MIT, Richard Greenblatt finishes work on Mac Hack, a program that plays competitive tournament chess.


    1967


    Waseda University's Ichiro Kato begins work on the WABOT project, which culminates in the unveiling of a full-scale humanoid intelligent robot five years later.


    1968


    Stanley Kubrick's adaptation of Arthur C. Clarke's science fiction novel 2001: A Space Odyssey, about the artificially intelligent computer HAL 9000, is one of the most influential and highly praised films of all time.


    1968


    At MIT, Terry Winograd starts work on SHRDLU, a natural language understanding program.


    1969


    Washington, DC hosts the First International Joint Conference on Artificial Intelligence (IJCAI).


    1972


    Artist Harold Cohen develops AARON, an artificial intelligence computer that generates paintings.


    1972


    Ken Colby describes his efforts using the software program PARRY to simulate paranoia.


    1972


    In What Computers Can't Do, Hubert Dreyfus offers his criticism of artificial intelligence's intellectual basis.


    1972


    Ted Shortliffe, a doctoral student at Stanford University, begins work on the MYCIN expert system, which aims to identify bacterial infections and suggest treatment options.


    1972


    The UK Science Research Council releases the Lighthill Report on Artificial Intelligence, which highlights AI technological shortcomings and the challenges of combinatorial explosion.


    1972


    The Assault on Privacy: Computers, Data Banks, and Dossiers, by Arthur Miller, is an early study on the societal implications of computers.


    1972


    INTERNIST-I, an internal medicine expert system, is being developed by University of Pittsburgh physician Jack Myers, medical student Randolph Miller, and computer scientist Harry Pople.


    1974


    Paul Werbos, a social scientist, completes his dissertation on a backpropagation algorithm that is now widely used in training artificial neural networks for supervised learning applications.


    1974


    Marvin Minsky distributes MIT AI Lab memo 306, "A Framework for Representing Knowledge." The memo discusses the notion of a frame, a "remembered framework" that fits reality by "changing detail as appropriate."


    1975


    The phrase "genetic algorithm" is used by John Holland to explain evolutionary strategies in natural and artificial systems.


    1976


    In Computer Power and Human Reason, computer scientist Joseph Weizenbaum expresses his mixed feelings on artificial intelligence research.


    1978


    At Rutgers University, EXPERT, a generic knowledge representation technique for constructing expert systems, goes live.


    1978


    Joshua Lederberg, Douglas Brutlag, Edward Feigenbaum, and Bruce Buchanan start the MOLGEN project at Stanford to solve DNA structures from segmentation data in molecular genetics research.


    1979


    Raj Reddy, a computer scientist at Carnegie Mellon University, founds the Robotics Institute.


    1979


    The first human is killed while working with a robot.


    1979


    Hans Moravec rebuilds and equips the Stanford Cart with a stereoscopic vision system after it has evolved into an autonomous rover over almost two decades.


    1980


    The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.


    1980


    In his Chinese Room argument, philosopher John Searle claims that a computer's modeling of action does not establish comprehension, intentionality, or awareness.


    1982


    Blade Runner, a science fiction film based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), is released.


    1982


    Physicist John Hopfield popularizes the associative neural network, initially developed by William Little in 1974.


    1984


    In Fortune Magazine, Tom Alexander writes "Why Computers Can't Outthink the Experts."


    1984


    At the Microelectronics and Computer Consortium (MCC) in Austin, TX, computer scientist Doug Lenat launches the Cyc project, which aims to create a vast commonsense knowledge base and artificial intelligence architecture.


    1984


    Orion Pictures releases the first Terminator picture, which features robotic assassins from the future and an AI known as Skynet.


    1986


    Honda establishes a research facility to build humanoid robots that can cohabit and interact with humans.


    1986


    Rodney Brooks, an MIT roboticist, describes the subsumption architecture for behavior-based robots.


    1986


    Marvin Minsky publishes The Society of Mind, which depicts the mind as a collection of cooperating agents.


    1989


    The MIT Artificial Intelligence Lab's Rodney Brooks and Anita Flynn publish "Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System," a paper discussing the possibility of sending small robots on interplanetary exploration missions.


    1993


    The Cog interactive robot project is launched at MIT by Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others.


    1995


    The phrase "generative music" was used by musician Brian Eno to describe systems that create ever-changing music by modifying parameters over time.


    1995


    The MQ-1 Predator unmanned aerial vehicle from General Atomics enters US military and reconnaissance service.


    1997


    Under standard tournament conditions, IBM's Deep Blue supercomputer defeats reigning world chess champion Garry Kasparov.


    1997


    In Nagoya, Japan, the inaugural RoboCup, an international tournament featuring over forty teams of robot soccer players, takes place.


    1997


    NaturallySpeaking is Dragon Systems' first commercial voice recognition software product.


    1999


    Sony introduces AIBO, a robotic dog, to the general public.


    2000


    The Advanced Step in Innovative Mobility humanoid robot, ASIMO, is unveiled by Honda.


    2001


    At Super Bowl XXXV, Viisage Technology unveils its FaceFINDER automatic face-recognition system.


    2002


    The Roomba autonomous household vacuum cleaner is released by the iRobot Corporation, which was created by Rodney Brooks, Colin Angle, and Helen Greiner.


    2004


    In the Mojave Desert near Primm, NV, DARPA hosts its inaugural autonomous vehicle Grand Challenge, but none of the cars complete the 150-mile route.


    2005


    Under the direction of neuroscientist Henry Markram, the Swiss Blue Brain Project is founded to simulate the human brain.


    2006


    Netflix offers a $1 million prize to the first programming team to build a substantially better recommender system based on prior user ratings.


    2007


    DARPA announces the Urban Challenge, an autonomous car competition that tests merging, passing, parking, and navigating traffic and intersections.


    2009


    Under the leadership of Sebastian Thrun, Google launches its self-driving car project (now known as Waymo) in the San Francisco Bay Area.


    2009


    Fei-Fei Li of Stanford University describes her work on ImageNet, a library of millions of hand-annotated photographs used to teach AIs to recognize the presence or absence of items visually.


    2010


    Human manipulation of automated trading algorithms causes a "flash crash" in the US stock market.


    2011


    Demis Hassabis, Shane Legg, and Mustafa Suleyman found DeepMind in the United Kingdom to teach AIs how to play and excel at classic video games.


    2011


    Watson, IBM's natural language computer system, defeats Jeopardy! champions Ken Jennings and Brad Rutter.


    2011


    The iPhone 4S ships with Siri, Apple's mobile virtual assistant.


    2011


    Computer scientist Andrew Ng and Google colleagues Jeff Dean and Greg Corrado launch an informal Google Brain deep learning research collaboration.


    2013


    The European Union launches the Human Brain Project, which aims to better understand how the human brain functions and to replicate its computational capabilities.


    2013


    Human Rights Watch launches the Campaign to Stop Killer Robots.


    2013


    Spike Jonze's science fiction drama Her is released. In the film, a man falls in love with Samantha, his AI virtual assistant.


    2014


    Ian Goodfellow and colleagues at the University of Montreal introduce generative adversarial networks (GANs), deep neural networks useful for generating realistic fake photos of human faces.


    2014


    Eugene Goostman, a chatbot that poses as a thirteen-year-old boy, is said to have passed a Turing-like test.


    2014


    According to physicist Stephen Hawking, the development of AI might lead to humanity's extinction.


    2015


    Facebook releases DeepFace, a deep learning face recognition system, on its social media platform.


    2016


    In a five-game match, DeepMind's AlphaGo program beats Lee Sedol, a 9-dan Go player.


    2016


    Microsoft releases its AI chatbot Tay on Twitter, where users quickly teach it to send abusive and inappropriate posts.


    2017


    The Future of Life Institute hosts the Asilomar Conference on Beneficial AI.


    2017


    Anthony Levandowski, an engineer at an AI self-driving start-up, founds the Way of the Future church with the goal of creating a superintelligent robot god.


    2018


    Google announces Duplex, an AI program that uses natural language to schedule appointments over the phone.


    2018


    The General Data Protection Regulation (GDPR) and "Ethics Guidelines for Trustworthy AI" are published by the European Union.


    2019


    A lung cancer screening AI developed by Google AI and Northwestern Medicine in Chicago, IL, surpasses specialized radiologists.


    2019


    OpenAI, cofounded by Elon Musk, releases an artificial intelligence text generator that produces realistic stories and journalism; it was initially judged "too risky" to release in full because of its potential to spread fake news.


    2020


    Google AI, in conjunction with the University of Waterloo, the "moonshot factory" X, and Volkswagen, announces TensorFlow Quantum, an open-source framework for quantum machine learning.




    ~ Jai Krishna Ponnappan










