
Artificial Intelligence - Who Is Demis Hassabis (1976–)?




Demis Hassabis lives in the United Kingdom and works as a computer game programmer, cognitive scientist, and artificial intelligence specialist.

He is a cofounder of DeepMind, the company that created the AlphaGo deep learning engine.

Hassabis is well-known for being a skilled game player.

His passion for video games paved the way for his career as an artificial intelligence researcher and computer game entrepreneur.

Hassabis' parents noticed his chess prowess at a young age.

By the age of thirteen, he had achieved the rank of chess master.

He is also a World Team Champion in the strategy board game Diplomacy, a World Series of Poker Main Event participant, and a multiple-time World Pentamind and World Decamentathlon Champion at the London Mind Sports Olympiad.

Hassabis began working at Bullfrog Games in Guildford, England, with renowned game designer Peter Molyneux when he was seventeen years old.

Bullfrog was notable for creating a variety of popular computer "god games." A god game is a computer-generated life simulation in which the user has power and influence over semiautonomous people in a diverse world.

Molyneux's Populous, published in 1989, is generally regarded as the first god game.

Hassabis co-designed and coded Theme Park, a simulation management game published by Bullfrog in 1994.

Hassabis left Bullfrog Games to pursue a degree at Queens' College, Cambridge.

In 1997, he earned a bachelor's degree in computer science.

Following graduation, Hassabis rejoined Molyneux at Lionhead Studios, a new gaming studio.

Hassabis worked briefly on the artificial intelligence for Black & White, another god game, in which the user reigned over a virtual island inhabited by different tribes.

Hassabis departed Lionhead after a year to launch his own video game studio, Elixir Studios.

Hassabis signed deals with major publishers such as Microsoft and Vivendi Universal.

Before closing in 2005, Elixir created a variety of games, including the diplomatic strategy simulation game Republic: The Revolution and the real-time strategy game Evil Genius.

Republic's artificial intelligence is modeled after Elias Canetti's 1960 book Crowds and Power, which explores how and why crowds follow rulers' power (which Hassabis boiled down to force, money, and influence).

Republic required the daily programming efforts of twenty-five programmers over the course of four years.

Hassabis thought that the AI in the game would be valuable to academics.

Hassabis took a break from game creation to pursue additional studies at University College London (UCL).

In 2009, he received his PhD in Cognitive Neuroscience.

In his research on individuals with hippocampal damage, Hassabis revealed links between memory loss and impaired imagination.

These findings revealed that the brain's memory systems may splice together recalled fragments of previous experiences to imagine hypothetical futures.

Hassabis continued his academic studies at the Gatsby Computational Neuroscience Unit at UCL and as a Wellcome Trust fellow for another two years.

He was also a visiting researcher at MIT and Harvard University.

Hassabis' cognitive science research influenced subsequent work in artificial intelligence on unsupervised learning, memory and one-shot learning, and imagination-based planning using generative models.

With Shane Legg and Mustafa Suleyman, Hassabis cofounded the London-based AI start-up DeepMind Technologies in 2011.

The company focused on interdisciplinary science, bringing together leading researchers and ideas from machine learning, neuroscience, engineering, and mathematics.

DeepMind's mission was to make scientific breakthroughs in artificial intelligence and to develop new general-purpose learning capabilities.

Hassabis has compared the project to the Apollo Program for AI.

DeepMind set itself the task of developing a computer capable of defeating human opponents at the abstract strategy board game Go.

Hassabis didn't want to build an expert system, a brute-force computer preprogrammed with Go-specific algorithms and heuristics.

Rather than a single-purpose system like the chess-playing Deep Blue, he intended to build a computer that adapted to playing games in ways comparable to human chess champion Garry Kasparov.

He sought to build a machine that could learn to deal with new problems and possess universality, which he defined as the ability to perform a variety of tasks.

The company's AlphaGo artificial intelligence agent, built to compete against Lee Sedol, an eighteen-time world champion Go player, used a reinforcement learning architecture.

In reinforcement learning, agents act within an environment (in this example, the Go board) to attain a certain objective (winning the game).

The agents have perceptual inputs (such as visual input) as well as a statistical model based on environmental data.

While collecting perceptual input and building a representation of its surroundings, the agent forms plans and simulates actions that update this model in pursuit of the objective.

The agent continually attempts to choose actions that will bring it closer to its goal.
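
The loop below is a minimal sketch of this idea, using a simplified tabular Q-learning agent rather than AlphaGo's actual architecture; the environment object (with its reset, step, and actions members) is a hypothetical stand-in for something like a Go board.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: the agent observes states, tries actions, and
    updates its value estimates so that chosen actions move it toward its goal."""
    q = defaultdict(float)  # (state, action) -> estimated future reward
    for _ in range(episodes):
        state = env.reset()  # initial perceptual input
        done = False
        while not done:
            # Explore occasionally; otherwise pick the action currently judged best.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)  # act on the environment
            # Nudge the estimate toward the reward plus discounted future value.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```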

Hassabis argues that solving the problems of goal-oriented agents within a reinforcement learning framework would be sufficient to fulfill the promise of artificial general intelligence.

He claims that biological systems work in a similar manner.

The dopamine system in human brains is responsible for implementing a reinforcement learning framework.

Mastering the game of Go usually takes a lifetime of study and practice.

Go has a significantly larger search space than chess.

There are more possible board positions in Go than there are atoms in the universe.

It is also thought to be nearly impossible to develop an evaluation function that covers a significant portion of those positions in order to determine where the next stone should be placed.

Each game is essentially unique, and exceptional players describe their decisions as being guided by intuition rather than logic.

AlphaGo addressed these obstacles by leveraging data gathered from thousands of strong amateur games played by human Go players to train a neural network.

After that, AlphaGo played millions of games against itself, learning to predict how likely each side was to win from the current board position.

In this way, no hand-crafted evaluation criteria were required.
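
A rough sketch of that self-play loop is shown below; the game, policy, and value objects are hypothetical stand-ins, and the real AlphaGo combines deep policy and value networks with Monte Carlo tree search rather than this bare update.

```python
def self_play_training(game, policy, value, num_games=10000, alpha=0.01):
    """Play the program against itself and nudge its predicted win probability
    for each visited position toward the eventual result, so that no
    hand-crafted evaluation function is needed."""
    for _ in range(num_games):
        position = game.initial_position()
        visited = []
        while not game.is_over(position):
            move = policy.select_move(position)   # move proposed by the policy network
            position = game.apply(position, move)
            visited.append(position)
        outcome = game.result(position)           # e.g., +1 if black won, -1 if white won
        for position in visited:
            # Shift the value estimate for this position toward the observed outcome.
            error = outcome - value.predict(position)
            value.update(position, alpha * error)
```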

In Seoul, South Korea, in 2016, AlphaGo beat Go champion Lee Sedol (four games to one).

The way AlphaGo plays is considered cautious.

It favors diagonal stone placements known as "shoulder hits" and plays to maximize its probability of winning rather than its margin of victory, placing less apparent emphasis on territorial gains on the board.

AlphaGo has since been succeeded by AlphaZero, a generalized version designed to play any two-person game.

Without any human training data or sample games, AlphaZero learns from scratch, entirely through self-play that begins with random moves.

After just four hours of training, AlphaZero defeated Stockfish, one of the strongest free and open-source chess engines, winning 28 games and drawing 72 without a loss.

In chess, AlphaZero prioritizes the mobility of the pieces over their material value, which results in a creative style of play (as in Go).

Another task the company took on was to develop a versatile, adaptable, and robust AI that could teach itself to play more than fifty Atari video games just by looking at the pixels and scores on a video screen.

For this challenge, Hassabis introduced deep reinforcement learning, which combines reinforcement learning and deep learning.

To achieve reliable perceptual recognition, deep neural networks combine an input layer of observations, weighted connections between layers, and backpropagation.

In the Atari challenge, the network was trained on the 20,000 pixel values displayed on the video game screen at any given time.

Deep learning handles perceiving and recognizing a given input; reinforcement learning then takes the machine from that recognition to meaningful action toward a goal.

In the Atari challenge, the computer learned how to win over hundreds of hours of playtime by selecting among eighteen distinct joystick actions at each time step.

To put it another way, a deep reinforcement learning machine is an end-to-end learning system capable of analyzing perceptual inputs, devising a strategy, and executing that strategy from start to finish.
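
As an illustration only, a deep Q-learning update in this style might be sketched in Keras as follows; the 84x84 grayscale input and the layer sizes are common conventions borrowed from published Atari work, not details given above, and the function names are hypothetical.

```python
import tensorflow as tf
from tensorflow import keras

NUM_ACTIONS = 18  # the eighteen joystick actions mentioned above
GAMMA = 0.99      # discount factor for future rewards

# Convolutional layers handle perception of the raw screen pixels; the final
# dense layer outputs one estimated return per joystick action.
q_net = keras.Sequential([
    keras.layers.Conv2D(16, 8, strides=4, activation="relu", input_shape=(84, 84, 1)),
    keras.layers.Conv2D(32, 4, strides=2, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(NUM_ACTIONS),
])
optimizer = keras.optimizers.Adam(1e-4)

def train_step(screens, actions, rewards, next_screens, done):
    """One deep Q-learning update: move the value of the chosen action toward
    the reward plus the discounted best value of the next screen.
    `done` is 1.0 where the episode ended, 0.0 otherwise."""
    next_q = q_net(next_screens)
    targets = rewards + GAMMA * (1.0 - done) * tf.reduce_max(next_q, axis=1)
    with tf.GradientTape() as tape:
        q = q_net(screens)
        chosen = tf.reduce_sum(q * tf.one_hot(actions, NUM_ACTIONS), axis=1)
        loss = tf.reduce_mean(tf.square(targets - chosen))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```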

DeepMind was purchased by Google in 2014.

Hassabis continues to work at Google with DeepMind's deep learning technology.

One of these efforts uses optical coherence tomography scans to detect eye disorders.

DeepMind's AI system can swiftly and reliably make diagnoses from these eye scans, triaging patients and recommending how they should be referred for further treatment.

AlphaFold is a system combining machine learning, physics, and structural biology that predicts a protein's three-dimensional structure based solely on its genetic sequence.

AlphaFold took first place in the 2018 "world championship" for Critical Assessment of Techniques for Protein Structure Prediction, successfully predicting the most accurate structure for 25 of 43 proteins.

AlphaStar is currently mastering the real-time strategy game StarCraft II. 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Deep Learning.



Further Reading:


“Demis Hassabis, Ph.D.: Pioneer of Artificial Intelligence.” 2018. Biography and interview. American Academy of Achievement. https://www.achievement.org/achiever/demis-hassabis-ph-d/.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing Limited.

Gibney, Elizabeth. 2015. “DeepMind Algorithm Beats People at Classic Video Games.” Nature 518 (February 26): 465–66.

Gibney, Elizabeth. 2016. “Google AI Algorithm Masters Ancient Game of Go.” Nature 529 (January 27): 445–46.

Proudfoot, Kevin, Josh Rosen, Gary Krieg, and Greg Kohs. 2017. AlphaGo. Roco Films.


Artificial Intelligence - What Is Deep Learning?

 



Deep learning is a subset of methods, tools, and techniques in artificial intelligence or machine learning.

Learning in this case involves the ability to derive meaningful information from various layers or representations of any given data set in order to complete tasks without human instruction.

Deep refers to the depth of a learning algorithm, which usually involves many layers.

Machine learning networks involving many layers are often considered to be deep, while those with only a few layers are considered shallow.

The recent rise of deep learning over the 2010s is largely due to computer hardware advances that permit the use of computationally expensive algorithms and allow storage of immense datasets.

Deep learning has produced exciting results in the fields of computer vision, natural language processing, and speech recognition.

Notable examples of its application can be found in personal assistants such as Apple's Siri or Amazon's Alexa, and in search, video, and product recommendations.

Deep learning has been used to beat human champions at popular games such as Go and Chess.

Artificial neural networks are the most common form of deep learning.

Neural networks extract information through multiple stacked layers commonly known as hidden layers.





These layers contain artificial neurons, which are connected via weights to neurons in other layers.

Neural networks often involve dense or fully connected layers, meaning that each neuron in any given layer will connect to every neuron of its preceding layer.

This allows the network to learn increasingly intricate details from the data passing through each successive layer.
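
As a minimal sketch (using Keras, one of the libraries cited in the further reading), a "deep" network of stacked, fully connected hidden layers might look like the following; the input size and layer widths are arbitrary illustrations, not values from the text.

```python
from tensorflow import keras

# Several stacked dense (fully connected) hidden layers between input and output.
model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # hidden layer 1
    keras.layers.Dense(64, activation="relu"),                       # hidden layer 2
    keras.layers.Dense(32, activation="relu"),                       # hidden layer 3
    keras.layers.Dense(10, activation="softmax"),                    # output layer
])
model.summary()  # shows how each layer connects fully to the layer before it
```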

Part of what separates deep learning from other forms of machine learning is its ability to work with unstructured data.

Unstructured data has no pre-arranged labels or features.

Using several stacked layers, deep learning algorithms can learn to extract their own features from unstructured inputs.

This is done through a hierarchical approach in which a deep, multi-layered learning algorithm extracts more detailed information with each successive layer, enabling it to break a very complicated problem down into a succession of smaller ones.

A network is trained in the following steps: First, small batches of labeled data are passed through the network.

The network's loss is computed by comparing its predictions to the true labels.

Backpropagation is used to compute the error and propagate corrections back to the weights.

The weights are adjusted gradually so that the loss shrinks with each round of predictions.

The process is repeated until the network reaches minimal loss and high prediction accuracy.
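
Continuing the sketch above, these steps can be expressed in a few lines; train_images and train_labels are hypothetical placeholders for a labeled dataset.

```python
# Compile the model with a loss function, then train on small batches of
# labeled data. Keras compares predictions to the true labels, backpropagates
# the error, and nudges the weights after every batch.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_images, train_labels,
          batch_size=32,   # small batches of labeled data
          epochs=10)       # repeat until the loss stops improving
```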

Deep learning has an advantage over many machine learning approaches and shallow learning networks since it can self-optimize its layers.

Shallow machine learning methods, which have only a few layers at most, require human involvement to prepare unstructured data for input, a process known as feature engineering.





This can be a lengthy procedure that takes far too much time to be practical, particularly if the dataset is enormous.

Because of these factors, shallow machine learning algorithms may seem to be a thing of the past.

Deep learning algorithms, on the other hand, come at a price.

Learning their own features requires a large quantity of data, which is not always available.

Furthermore, as data volumes grow, so do the processing power and training time required.

Depending on the number and kinds of layers utilized, training time will also rise.

Fortunately, cloud computing, which lets anyone rent powerful machines for a price, makes it possible to run some of the most demanding deep learning networks.

Convolutional neural networks use hidden layer types that are not part of the standard neural network design.

Deep learning of this kind is most often associated with computer vision, where it is now the most widely used approach.

To extract information from an image, a basic convnet typically uses three kinds of layers: convolutional layers, pooling layers, and dense layers.

Convolutional layers gather information from low-level features such as edges and curves by sliding a window, or convolutional kernel, over the picture.

Subsequent stacked convolutional layers will repeat this procedure over the freshly generated layers of low-level features, looking for increasingly higher-level characteristics until the picture is fully understood.

Different hyperparameters, such as the size of the kernel or the stride with which it slides over the image, can be adjusted to detect different sorts of features.

Pooling layers enable a network to learn higher-level elements of an image progressively by downsampling the image along the way.

Without pooling layers placed between convolutional layers, the network can become too computationally costly, since each successive layer examines more detailed data.

In addition, the pooling layer reduces the size of an image while preserving important details.

These features become translation invariant, meaning that a feature seen in one part of an image can be recognized in a completely different region of the same image.

The ability of a convolutional neural network to retain positional information is critical for image classification.

The ability of deep learning to automatically parse through unstructured data to find local features that it deems important while retaining positional information about how these features interact with one another demonstrates the power of convolutional neural networks.
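
A minimal convnet sketch using the three layer types described above might look like the following; the input size and filter counts are illustrative assumptions only.

```python
from tensorflow import keras

# Convolutional layers detect local features, pooling layers downsample,
# and dense layers perform the final classification.
convnet = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                        input_shape=(64, 64, 3)),              # low-level edges and curves
    keras.layers.MaxPooling2D(pool_size=2),                    # downsample, keep salient detail
    keras.layers.Conv2D(64, kernel_size=3, activation="relu"), # higher-level features
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),              # final classification
])
```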

Recurrent neural networks excel at sequence-based tasks like sentence completion and stock price prediction.

The essential idea is that, unlike the networks described above, in which neurons only pass information forward, neurons in a recurrent neural network feed information forward while also looping their output back to themselves at each time step.

Recurrent neural networks may be regarded as having a rudimentary type of memory, since each time step includes recurrent information from all previous time steps.

This is often utilized in natural language processing projects because recurrent neural networks can handle text in a way that is more human-like.

Instead of seeing a sentence as a collection of isolated words, a recurrent neural network can analyze the sentiment of a statement or even generate the next sentence based on what has already been said.
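
A small recurrent sketch for that kind of sentiment task might look like this; the vocabulary size and layer widths are illustrative assumptions.

```python
from tensorflow import keras

# Read a sentence word by word and predict whether its sentiment is positive.
rnn = keras.Sequential([
    keras.layers.Embedding(input_dim=10000, output_dim=32),  # words -> vectors
    keras.layers.SimpleRNN(32),    # each step feeds its output back into the next
    keras.layers.Dense(1, activation="sigmoid"),              # positive vs. negative
])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```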

Deep learning can provide powerful techniques for evaluating unstructured data, in many respects akin to human abilities.

Unlike humans, deep learning networks never get tired.

Deep learning may substantially outperform standard machine learning techniques when given enough training data and powerful computers, particularly given its autonomous feature engineering capabilities.

Image classification, voice recognition, and self-driving vehicles are just a few of the fields that have benefited tremendously from deep learning research over the previous decade.

Many exciting new deep learning applications will emerge if current enthusiasm and advances in computer hardware continue.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Automatic Film Editing; Berger-Wolf, Tanya; Cheng, Lili; Clinical Decision Support Systems; Hassabis, Demis; Tambe, Milind.


Further Reading:


Chollet, François. 2018. Deep Learning with Python. Shelter Island, NY: Manning Publications.

Géron, Aurélien. 2019. Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Second edition. Sebastopol, CA: O’Reilly Media.

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2017. Deep Learning. Cambridge, MA: MIT Press.

Artificial Intelligence - What Is The Deep Blue Computer?





Since the 1950s, artificial intelligence has been used to play chess.

Chess has been studied for a variety of reasons.

First, since there are a limited number of pieces that may occupy distinct spots on the board, the game is simple to represent in computers.

Second, the game is quite challenging to play.

There are a tremendous number of alternative states (piece configurations), and exceptional chess players evaluate both their own and their opponents' actions, which means they must predict what could happen many turns in the future.

Finally, chess is a competitive sport.

When a human competes against a computer, they are comparing intellect.

In 1997, Deep Blue became the first computer to beat a reigning chess world champion, demonstrating that machine intelligence was catching up to humans.





Deep Blue had its origins in 1985.

Feng-Hsiung Hsu, Thomas Anantharaman, and Murray Campbell created ChipTest, a chess-playing computer, while at Carnegie Mellon University.

The computer used brute force, generating and comparing move sequences using the alpha-beta search technique in order to determine the best one.

The generated positions were scored by an evaluation function, enabling different positions to be compared.

Furthermore, the algorithm was adversarial, anticipating the opponent's moves in order to find a way to defeat them.
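
The following is a minimal sketch of alpha-beta search in this spirit, not Deep Blue's actual code; the game-specific helpers (evaluate, legal_moves, apply_move) are hypothetical stand-ins.

```python
def alpha_beta(position, depth, alpha, beta, maximizing, evaluate, legal_moves, apply_move):
    """Generate move sequences, score leaf positions with an evaluation function,
    and assume the opponent always answers with its best reply."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # score this position for comparison
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alpha_beta(apply_move(position, move), depth - 1,
                                        alpha, beta, False,
                                        evaluate, legal_moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:              # opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alpha_beta(apply_move(position, move), depth - 1,
                                        alpha, beta, True,
                                        evaluate, legal_moves, apply_move))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```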

If a computer has enough time and memory to execute the calculations, it can theoretically produce and evaluate an unlimited number of moves.

When used in tournament play, however, the machine is limited in both respects.

ChipTest was able to generate and assess 50,000 moves per second thanks to a single special-purpose chip.

The search process was enhanced in 1988 with singular extensions, which can rapidly identify a move that is superior to all other options.

By quickly identifying superior moves, ChipTest could build longer sequences and look farther ahead in the game, testing human players' foresight.

Mike Browne and Andreas Nowatzyk joined the team as ChipTest developed into Deep Thought.

Deep Thought was able to process about 700,000 chess moves per second thanks to two upgraded move generator chips.

Deep Thought defeated Bent Larsen in 1988, becoming the first computer to defeat a chess grandmaster.

After IBM recruited the majority of the development team, work on Deep Thought continued.

The team then set its sights on defeating the world's finest chess player.





Garry Kasparov was the finest chess player in the world at the time and one of the best of any generation.

Kasparov, who was born in Baku, Azerbaijan, in 1963, won the Soviet Junior Championship when he was twelve years old.

He was the youngest player to qualify for the Soviet Chess Championship at the age of fifteen.

He won the under-twenty world championship when he was seventeen years old.

Kasparov was also the world's youngest chess champion, having won the championship at the age of twenty-two in 1985.

He held the championship until 1993, when he was forced to relinquish it after quitting the International Chess Federation.

He immediately won the Classical World Championship, a title he held from 1993 to 2000.

Kasparov was ranked the best chess player in the world for most of the period from 1986 to 2005, when he retired.

Deep Thought faced off against Kasparov in a two-game match in 1989.

Kasparov easily overcame Deep Thought by winning both games.

Deep Thought evolved into Deep Blue, which played only two matches, both against Kasparov.

Going into the matches, Kasparov was at a disadvantage against Deep Blue.

He would scout his opponents before matches, as do many chess players, by watching them play or reading records of tournament matches to obtain insight into their play style and methods.

Deep Blue, on the other hand, had no prior match record, having played only private games against its developers before facing Kasparov.

As a result, Kasparov was unable to scout Deep Blue.

The developers, on the other hand, had access to Kasparov's match history, allowing them to tailor Deep Blue to his playing style.

Despite this, Kasparov remained confident, claiming that no machine would ever be able to defeat him.

On February 10, 1996, Deep Blue and Kasparov played their first six-game match in Philadelphia.

Deep Blue won the opening game, becoming the first machine to defeat a reigning world champion in a single game.

With three victories and two draws in the remaining games, Kasparov went on to win the match.

The contest drew international attention, and a rematch was planned.

Deep Blue and Kasparov faced off in another six-game contest on May 11, 1997, at the Equitable Center in New York City, after a series of improvements.

The match was played before a live audience and was broadcast.

At this point, Deep Blue was composed of 400 special-purpose chips capable of searching through 200,000,000 chess moves per second.

Kasparov won the first game, while Deep Blue won the second.

The following three games were draws.

The final game would determine the match.

In this final game, Deep Blue capitalized on a mistake by Kasparov, causing the champion to concede after nineteen moves.

Deep Blue became the first machine ever to defeat a reigning world champion in a match.

Kasparov believed that a human had interfered with the match, providing Deep Blue with winning moves.

The claim was based on a move made in the second game, where Deep Blue made a sacrifice that (to many) hinted at a different strategy than the machine had used in prior games.

The move had a significant impact on Kasparov, upsetting him for the remainder of the match and affecting his play.

Two factors may have combined to generate the move.

First, Deep Blue underwent modifications between the first and second game to correct strategic flaws, thereby influencing its strategy.

Second, designer Murray Campbell mentioned in an interview that if the machine could not decide which move to make, it would select one at random; thus there was a chance that surprising moves would be made.

Kasparov requested a rematch and was denied.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: 


Demis Hassabis.



Further Reading:


Campbell, Murray, A. Joseph Hoane Jr., and Feng-Hsiung Hsu. 2002. “Deep Blue.” Artificial Intelligence 134, no. 1–2 (January): 57–83.

Hsu, Feng-Hsiung. 2004. Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton, NJ: Princeton University Press.

Kasparov, Garry. 2018. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. London: John Murray.

Levy, Steven. 2017. “What Deep Blue Tells Us about AI in 2017.” Wired, May 23, 2017. https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/.


