Artificial Intelligence - Who Is Demis Hassabis (1976–)?




Demis Hassabis is a British computer game programmer, cognitive scientist, and artificial intelligence researcher.

He is a cofounder of DeepMind, the company that created the AlphaGo deep learning engine.

Hassabis is well-known for being a skilled game player.

His passion for video games paved the way for his career as an artificial intelligence researcher and computer game entrepreneur.

Hassabis' parents noticed his chess prowess at a young age.

At the age of thirteen, he had achieved the status of chess master.

He is also a World Team Champion in the strategic board game Diplomacy, a World Series of Poker Main Event participant, and a multiple-time World Pentamind and World Decamentathlon Champion at the London Mind Sports Olympiad.

Hassabis began working at Bullfrog Games in Guildford, England, with renowned game designer Peter Molyneux when he was seventeen years old.

Bullfrog was notable for creating a variety of popular computer "god games." A god game is a computer-generated life simulation in which the user has power and influence over semiautonomous people in a diverse world.

Molyneux's Populous, published in 1989, is generally regarded as the first god game.

Hassabis co-designed and coded Theme Park, a simulation management game published by Bullfrog in 1994.

Hassabis left Bullfrog Games to pursue a degree at Queens' College, Cambridge.

In 1997, he earned a bachelor's degree in computer science.

Following graduation, Hassabis rejoined Molyneux at Lionhead Studios, a new gaming studio.

For a time, Hassabis worked on the artificial intelligence for Black & White, another god game, in which the player reigned over a virtual island inhabited by various tribes.

Hassabis departed Lionhead after a year to launch his own video game studio, Elixir Studios.

Elixir signed deals with major publishers such as Microsoft and Vivendi Universal.

Before closing in 2005, Elixir created a variety of games, including the diplomatic strategy simulation game Republic: The Revolution and the real-time strategy game Evil Genius.

Republic's artificial intelligence is modeled on Elias Canetti's 1960 book Crowds and Power, which explores how and why crowds obey rulers' power (which Hassabis boiled down to force, money, and influence).

Republic required the daily programming efforts of twenty-five programmers over the course of four years.

Hassabis thought that the AI in the game would be valuable to academics.

Hassabis took a break from game creation to pursue additional studies at University College London (UCL).

In 2009, he received his PhD in Cognitive Neuroscience.

In his research on individuals with hippocampal damage, Hassabis revealed links between memory loss and impaired imagination.

These findings revealed that the brain's memory systems may splice together recalled fragments of previous experiences to imagine hypothetical futures.

Hassabis continued his academic studies at the Gatsby Computational Neuroscience Unit at UCL and as a Wellcome Trust fellow for another two years.

He was also a visiting researcher at MIT and Harvard University.

Hassabis' cognitive science research influenced subsequent work in artificial intelligence on unsupervised learning, memory and one-shot learning, and imagination-based planning using generative models.

With Shane Legg and Mustafa Suleyman, Hassabis cofounded the London-based AI start-up DeepMind Technologies in 2011.

The company focused on interdisciplinary science, bringing together leading researchers and ideas from machine learning, neuroscience, engineering, and mathematics.

DeepMind's mission was to make scientific breakthroughs in artificial intelligence and to develop general-purpose learning algorithms.

Hassabis has compared the project to the Apollo Program for AI.

One of DeepMind's defining challenges was to develop a computer capable of defeating human opponents in the abstract strategy board game Go.

Hassabis didn't want to build an expert system, a brute-force computer preprogrammed with Go-specific algorithms and heuristics.

Unlike the single-purpose, chess-playing Deep Blue system, he intended to construct a computer that learned to play games in a way comparable to human chess champion Garry Kasparov.

He sought to build a machine that could learn to deal with new problems and that had universality, which he defined as the ability to perform many different tasks.

The company's AlphaGo artificial intelligence agent was built on a reinforcement learning architecture and designed to compete against Lee Sedol, an eighteen-time world champion Go player.

In reinforcement learning, an agent in an environment (in this case, the Go board) aims to attain a certain objective (winning the game).

The agent receives perceptual inputs (such as visual input) and builds a statistical model of the environment from that data.

While collecting perceptual input and developing a representation of its surroundings, the agent makes plans and runs simulations of actions that will update the model and move it toward the objective.

At every step, the agent attempts to choose actions that will bring it closer to its goal.
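
The loop just described can be sketched in a few lines of code. Below is a minimal tabular Q-learning example; Q-learning is one classic reinforcement learning algorithm, and the tiny corridor environment and reward values are illustrative assumptions, not anything DeepMind used.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: an agent learns which actions
# bring it closer to a goal purely from reward feedback.
# The 5-state corridor environment is an illustrative assumption.

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 wins
ACTIONS = [-1, +1]             # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = defaultdict(float)         # Q[(state, action)] -> estimated value

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Move the value estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: the best action in each non-goal state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```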

Hassabis argues that solving all of the problems of goal-oriented agents within a reinforcement learning framework would be sufficient to fulfill the promise of artificial general intelligence.

He claims that biological systems work in a similar manner.

The dopamine system in the human brain, for example, is thought to implement a form of reinforcement learning.

Mastering the game of Go usually takes a lifetime of study and practice.

Go has a far larger search space than chess.

There are more possible Go board positions than there are atoms in the observable universe.
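
The scale of that claim is easy to check: each of the 19 × 19 = 361 intersections can be empty, black, or white, which gives an upper bound of 3^361, or roughly 10^172, board configurations, against an estimated 10^80 atoms in the observable universe. A quick back-of-the-envelope calculation:

```python
import math

# Upper bound on Go board configurations: each of the 19 x 19 = 361
# intersections is empty, black, or white (ignoring legality rules).
positions = 3 ** 361
print(f"Go configurations: about 10^{math.log10(positions):.0f}")  # ~10^172

# Estimated atoms in the observable universe: ~10^80
print("Atoms in the observable universe: about 10^80")
```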

It is also considered practically impossible to hand-craft an evaluation function that covers a meaningful fraction of those positions in order to judge where the next stone should be placed on the board.

Each game is essentially unique, and exceptional players describe their decisions as being guided by intuition rather than logic.

AlphaGo addressed these obstacles by using data from thousands of games played by strong amateur human Go players to train a neural network.

After that, AlphaGo played millions of games against itself, learning to predict how likely each side was to win from the current board position.

In this way, no hand-crafted evaluation function was required.
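
In outline, that self-play value-learning step looks something like the sketch below. This is a drastic simplification for illustration: the random "positions" are placeholders for real board states, and the logistic model stands in for AlphaGo's deep value network.

```python
import numpy as np

# Simplified sketch of value learning from self-play: label each
# position in a finished game with the eventual winner, then fit a
# model to predict the probability that black wins from any position.

rng = np.random.default_rng(0)
N_FEATURES = 32
w = np.zeros(N_FEATURES)                 # weights of the toy value model

def self_play_game():
    """Placeholder self-play game: returns (positions, outcome)."""
    positions = rng.normal(size=(40, N_FEATURES))   # 40 fake positions
    outcome = 1.0 if rng.random() < 0.5 else 0.0    # 1.0 = black wins
    return positions, outcome

for game in range(1000):
    X, z = self_play_game()
    p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted win probabilities
    w -= 0.01 * X.T @ (p - z) / len(X)   # log-loss gradient step toward z

# After training, w maps any position's features to a win-probability estimate.
```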

In Seoul, South Korea, in 2016, AlphaGo beat Go champion Lee Sedol (four games to one).

The way AlphaGo plays is considered cautious.

It favors stone placements such as the diagonal "shoulder hit" that maximize its probability of winning rather than its margin of victory, putting less apparent focus on immediate territorial gains on the board.

AlphaGo has since been generalized into AlphaZero, a system designed to learn any two-person game.

AlphaZero learns from scratch, without any human training data or example games; it learns entirely from random self-play.

After just four hours of training, AlphaZero crushed Stockfish, one of the best free and open-source chess engines (28 wins to 0 losses, with 72 draws).

When playing chess, AlphaZero prioritizes piece mobility over material, which results in a creative style of play (much as in Go).

Another challenge the company took on was developing a versatile, adaptable, and robust AI that could teach itself to play more than fifty Atari video games using only the pixels and scores on the video screen as input.

For this challenge, Hassabis introduced deep reinforcement learning, which combines reinforcement learning and deep learning.

To create a neural network capable of reliable perceptual recognition, a deep network needs an input layer that receives observations, weighted connections between its layers, and backpropagation to train those weights.

In the case of the Atari challenge, the network was trained on the roughly 20,000 pixel values displayed on the videogame screen at any given moment.

On top of this deep learning foundation, reinforcement learning takes the machine from perceiving and recognizing a given input to taking meaningful action toward a goal.

In the Atari challenge, the computer learned how to win over hundreds of hours of playtime by selecting among eighteen discrete joystick actions at each time step.

To put it another way, a deep reinforcement learning machine is an end-to-end learning system: it analyzes perceptual inputs, devises a strategy, and executes that strategy from start to finish.
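
Put together, such an agent has roughly the structure sketched below, in the style of DeepMind's published DQN algorithm. The tiny placeholder environment, network sizes, and hyperparameters are assumptions for illustration; the actual Atari agent used convolutional networks over preprocessed screen frames.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# DQN-style sketch: a network maps raw observations ("pixels") to one
# value per action and is trained from replayed experience. The random
# placeholder environment stands in for an Atari emulator.

N_OBS, N_ACTIONS = 84, 18        # flattened pixel inputs, 18 joystick actions

q_net = nn.Sequential(nn.Linear(N_OBS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)    # experience replay buffer
gamma, epsilon, batch_size = 0.99, 0.1, 32

def env_step(action):
    """Placeholder environment: returns (next_obs, reward, done)."""
    return torch.rand(N_OBS), random.random(), random.random() < 0.02

obs = torch.rand(N_OBS)
for step in range(2000):
    # Epsilon-greedy: explore occasionally, else act on the network's Q-values
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(obs).argmax())
    next_obs, reward, done = env_step(action)
    replay.append((obs, action, reward, next_obs, done))
    obs = torch.rand(N_OBS) if done else next_obs

    if len(replay) >= batch_size:
        o, a, r, o2, d = zip(*random.sample(replay, batch_size))
        o, o2 = torch.stack(o), torch.stack(o2)
        q = q_net(o)[torch.arange(batch_size), torch.tensor(a)]
        with torch.no_grad():  # bootstrapped target: r + gamma * max Q(s', .)
            target = (torch.tensor(r) + gamma * q_net(o2).max(1).values
                      * (1.0 - torch.tensor(d, dtype=torch.float32)))
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```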

DeepMind was purchased by Google in 2014.

Hassabis continues to work on applications of DeepMind's deep learning technology at Google.

One of these efforts applies deep learning to optical coherence tomography (OCT) scans for eye disorders.

DeepMind's AI system can swiftly and reliably diagnose conditions from eye scans, triaging patients and recommending how they should be referred for further treatment.

AlphaFold is a system combining machine learning, physics, and structural biology that predicts three-dimensional protein structures based solely on a protein's genetic sequence.

AlphaFold took first place in the 2018 "world championship" for Critical Assessment of Techniques for Protein Structure Prediction, successfully predicting the most accurate structure for 25 of 43 proteins.

AlphaStar is currently mastering the real-time strategy game StarCraft II. 



Jai Krishna Ponnappan





See also: 


Deep Learning.



Further Reading:


“Demis Hassabis, Ph.D.: Pioneer of Artificial Intelligence.” 2018. Biography and interview. American Academy of Achievement. https://www.achievement.org/achiever/demis-hassabis-ph-d/.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing Limited.

Gibney, Elizabeth. 2015. “DeepMind Algorithm Beats People at Classic Video Games.” Nature 518 (February 26): 465–66.

Gibney, Elizabeth. 2016. “Google AI Algorithm Masters Ancient Game of Go.” Nature 529 (January 27): 445–46.

Proudfoot, Kevin, Josh Rosen, Gary Krieg, and Greg Kohs. 2017. AlphaGo. Roco Films.

