
Artificial Intelligence - Machine Learning Regressions.

 


"Machine learning," a phrase originated by Arthur Samuel in 1959, is a kind of artificial intelligence that produces results without requiring explicit programming.

Instead, the system learns from a database on its own and improves over time.

Machine learning techniques have a wide range of applications (e.g., computer vision, natural language processing, autonomous gaming agents, classification, and regressions) and are used in practically every sector due to their resilience and simplicity of implementation (e.g., tech, finance, research, education, gaming, and navigation).

Despite their vast range of applications, machine learning algorithms may be broadly classified into three learning types: supervised, unsupervised, and reinforcement learning.

Supervised learning is exemplified by machine learning regressions.

They use algorithms that have been trained on data with labeled continuous numerical outputs.

The quantity of training data and the validation criteria required to suitably train and verify a regression algorithm depend on the problem being addressed.

For data with comparable input structures, the newly developed predictive models give inferred outputs.

These aren't static models.

They may be updated on a regular basis with new training data or by supplying the correct outputs for previously unlabeled inputs.

Despite machine learning methods' generalizability, no single algorithm is optimal for all regression problems.

When choosing the best machine learning regression method for a given problem, there are many factors to consider (e.g., programming languages, available libraries, algorithm types, data size, and data structure).





Like classic statistical methods, some machine learning programs employ single-variable or multivariable linear regression approaches.

These models represent the connections between a single or several independent feature variables and a dependent target variable.

The models provide linear representations of the combined input variables as their output.

These models are only applicable to noncomplex and small data sets.

Polynomial regressions may be used with nonlinear data.

However, this requires the programmer to know the data's structure beforehand, which is often the very thing machine learning models are meant to uncover.

These methods are unlikely to be appropriate for most real-world data, but they give a basic starting point and might provide users with models that are straightforward to understand.
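As a concrete illustration, the sketch below fits both a straight-line and a polynomial regression to the same single feature. It is a minimal example using the scikit-learn library; the synthetic data and its quadratic relationship are invented purely for demonstration.

```python
# A minimal sketch of linear and polynomial regression using scikit-learn.
# The data here is synthetic and exists only for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))                           # single feature variable
y = 0.5 * X[:, 0] ** 2 - 2 * X[:, 0] + rng.normal(0, 1, 100)    # nonlinear target with noise

linear_model = LinearRegression().fit(X, y)                     # straight-line fit
poly_model = make_pipeline(PolynomialFeatures(degree=2),
                           LinearRegression()).fit(X, y)        # quadratic fit

print("linear R^2:    ", linear_model.score(X, y))
print("polynomial R^2:", poly_model.score(X, y))
```

On data like this, the polynomial model fits markedly better, but only because the programmer chose the right degree in advance.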

Decision trees, as the name implies, are tree-like structures that map the input features/attributes of programs to determine the eventual output goal.

A decision tree algorithm starts with a root node (i.e., an input variable), and the answer to that node's condition splits into edges.

An edge that no longer divides ends in a leaf; an internal edge is one that continues to split.

For example, age, weight, and family diabetic history might be used as input factors in a dataset of diabetic and nondiabetic patients to estimate the likelihood of a new patient developing diabetes.

The age variable might be used as the root node (e.g., age 40), with the dataset being divided into those who are more than or equal to 40 and those who are 39 and younger.

For example, if the next internal node after the "age 40 or older" branch asks whether a parent has or had diabetes, and the leaf for affirmative answers estimates a 60 percent likelihood of this patient acquiring diabetes, the model returns that leaf as its final output.

This is a very basic decision tree that demonstrates the decision-making process.

Thousands of nodes may readily be found in a decision tree.

Random forest algorithms are essentially ensembles of decision trees.

They are made up of hundreds of decision trees, the ultimate outputs of which are the averaged outputs of the individual trees.

Although decision trees and random forests are excellent at learning very complex data structures, they are prone to overfitting.

With adequate pruning (e.g., setting limits on the number of samples required for splits and leaves) and large enough random forests, overfitting may be reduced.
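The pruning controls mentioned above are ordinary parameters in most libraries. The sketch below is a minimal scikit-learn example; the synthetic "diabetes-style" features and the specific limits chosen are illustrative assumptions, not recommendations.

```python
# A minimal sketch of a pruned decision tree and a random forest in scikit-learn.
# The diabetes-style features here are synthetic stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = np.column_stack([rng.integers(20, 80, 500),     # age
                     rng.normal(80, 15, 500),       # weight
                     rng.integers(0, 2, 500)])      # family diabetic history (0/1)
y = (X[:, 0] > 40) & (X[:, 2] == 1)                 # toy label rule for demonstration

# "Pruning" via limits on depth, splits, and leaves helps reduce overfitting.
tree = DecisionTreeClassifier(max_depth=4, min_samples_split=20, min_samples_leaf=10)
forest = RandomForestClassifier(n_estimators=300, min_samples_leaf=10)

print("tree accuracy:  ", cross_val_score(tree, X, y, cv=5).mean())
print("forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```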

Machine learning techniques inspired by the neural connections of the human brain are known as neural networks.


Neurons are the basic unit of neural network algorithms, much as they are in the human brain, and they are organized into numerous layers.

The input layer contains the input variables, the hidden layers include the layers of neurons (there may be numerous hidden levels), and the output layer contains the final neuron.

A single neuron in a feedforward process 

(a) takes the input feature variables, 

(b) multiplies the feature values by a weight, 

(c) adds the resultant feature products, together with a bias variable, and 

(d) passes the sums through an activation function, most often a sigmoid function.
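Steps (a) through (d) amount to only a few lines of arithmetic. The sketch below is a minimal NumPy illustration; the weights, bias, and inputs are invented values.

```python
# A minimal sketch of one neuron's feedforward pass, following steps (a)-(d) above.
# Weights, bias, and inputs are invented values for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))     # (d) activation function

x = np.array([0.8, 0.2, 0.5])           # (a) input feature values
w = np.array([0.4, -0.6, 0.9])          # (b) one weight per feature
b = 0.1                                 # bias variable

z = np.dot(w, x) + b                    # (b) + (c) weighted sum of features plus bias
output = sigmoid(z)                     # (d) neuron output passed to the next layer
print(output)
```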


The partial derivative computations of the previous neurons and neural layers are used to alter the weights and biases of each neuron.

Backpropagation is the term for this practice.


The output of the activation function of a single neuron is distributed to all neurons in the next hidden layer or final output layer.

As a result, the projected value is the last neuron's output.

Because neural networks are exceptionally adept at learning exceedingly complicated variable associations, programmers may spend less time reconstructing their data.

Neural network models, on the other hand, are difficult to interpret due to their complexity, and the intervariable relationships are largely hidden.

When used on extremely big datasets, neural networks operate best.

They require meticulous hyperparameter tuning and considerable processing power.

For data scientists attempting to comprehend massive datasets, machine learning has become the standard technique.

Machine learning systems are always being improved in terms of accuracy and usability by researchers.

Machine learning algorithms, on the other hand, are only as useful as the data used to train the model.

Poor data produces dramatically erroneous outcomes, while biased data combined with a lack of knowledge deepens societal disparities.

 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Algorithmic Bias and Error; Automated Machine Learning; Deep Learning; Explainable AI; Gender and AI.



Further Reading:


Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorithmic Bias.” World Policy Journal 33, no. 4 (Winter): 111–17.

Géron, Aurélien. 2019. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, CA: O’Reilly.



Artificial Intelligence - Who Is Demis Hassabis (1976–)?




Demis Hassabis lives in the United Kingdom and works as a computer game programmer, cognitive scientist, and artificial intelligence specialist.

He is a cofounder of DeepMind, the company that created the AlphaGo deep learning engine.

Hassabis is well-known for being a skilled game player.

His passion for video games paved the way for his career as an artificial intelligence researcher and computer game entrepreneur.

Hassabis' parents noticed his chess prowess at a young age.

At the age of thirteen, he had achieved the status of chess master.

He is also a World Team Champion in the strategic board game Diplomacy, a World Series of Poker Main Event participant, and a multiple-time World Pentamind and World Decamentathlon Champion at the London Mind Sports Olympiad.

Hassabis began working at Bullfrog Games in Guildford, England, with renowned game designer Peter Molyneux when he was seventeen years old.

Bullfrog was notable for creating a variety of popular computer "god games." A god game is a computer-generated life simulation in which the user has power and influence over semiautonomous people in a diverse world.

Molyneux's Populous, published in 1989, is generally regarded as the first god game.

Hassabis co-designed and coded Theme Park, a simulation management game published by Bullfrog in 1994.

Hassabis left Bullfrog Games to pursue a degree at Queens' College, Cambridge.

In 1997, he earned a bachelor's degree in computer science.

Following graduation, Hassabis rejoined Molyneux at Lionhead Studios, a new gaming studio.

Hassabis worked on the artificial intelligence for the game Black & White, another god game in which the user reigned over a virtual island inhabited by different tribes, for a short time.

Hassabis departed Lionhead after a year to launch his own video game studio, Elixir Studios.

Hassabis signed deals with major publishers such as Microsoft and Vivendi Universal.

Before closing in 2005, Elixir created a variety of games, including the diplomatic strategy simulation game Republic: The Revolution and the real-time strategy game Evil Genius.

Republic's artificial intelligence is modeled after Elias Canetti's 1960 book Crowds and Power, which explores how and why crowds follow rulers' power (which Hassabis boiled down to force, money, and influence).

Republic required the daily programming efforts of twenty-five programmers over the course of four years.

Hassabis thought that the AI in the game would be valuable to academics.

Hassabis took a break from game creation to pursue additional studies at University College London (UCL).

In 2009, he received his PhD in Cognitive Neuroscience.

In his research on individuals with hippocampal damage, Hassabis revealed links between memory loss and impaired imagination.

These findings revealed that the brain's memory systems may splice together recalled fragments of previous experiences to imagine hypothetical futures.

Hassabis continued his academic studies at the Gatsby Computational Neuroscience Unit at UCL and as a Wellcome Trust fellow for another two years.

He was also a visiting researcher at MIT and Harvard University.

Hassabis' cognitive science study influenced subsequent work on unsupervised learning, memory and one-shot learning, and imagination-based planning utilizing generic models in artificial intelligence.

With Shane Legg and Mustafa Suleyman, Hassabis cofounded the London-based AI start-up DeepMind Technologies in 2011.

The organization was focused on interdisciplinary science, bringing together premier academics and concepts from machine learning, neurology, engineering, and mathematics.

The mission of DeepMind was to create scientific breakthroughs in artificial intelligence and develop new artificial general-purpose learning capabilities.

Hassabis has compared the project to the Apollo Program for AI.

DeepMind was tasked with developing a computer capable of defeating human opponents in the abstract strategic board game Go.

Hassabis didn't want to build an expert system, a brute-force computer preprogrammed with Go-specific algorithms and heuristics.

Rather than building a single-purpose system like the chess-playing Deep Blue, he intended to construct a computer that adapted to playing games in ways comparable to human chess champion Garry Kasparov.

He sought to build a machine that could learn to deal with new issues and have universality, which he defined as the ability to do a variety of jobs.

The reinforcement learning architecture was used by the company's AlphaGo artificial intelligence agent, which was built to compete against Lee Sedol, an eighteen-time world champion Go player.

In reinforcement learning, agents in an environment (in this example, the Go board) aim to attain a certain objective (winning the game).

The agents have perceptual inputs (such as visual input) as well as a statistical model based on environmental data.

The agent creates plans and goes through simulations of actions that will modify the model in order to accomplish the objective while collecting perceptual input and developing a representation of its surroundings.

The agent is always attempting to choose behaviors that will get it closer to its goal.
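That loop, observe the environment, choose an action, collect a reward, and update the internal model, can be sketched in a few lines of tabular Q-learning. The toy "walk to the rightmost cell" environment below is invented for illustration and is, of course, far simpler than anything DeepMind used for Go.

```python
# A minimal sketch of the generic reinforcement learning loop (tabular Q-learning).
# The toy "walk to the rightmost cell" environment is invented for illustration.
import numpy as np

n_states, n_actions = 5, 2                 # states 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # the agent's model of action values
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    while state != n_states - 1:           # goal: reach the rightmost cell
        # choose an action: usually the best-known one, occasionally a random one
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # nudge the value estimate toward the reward plus discounted future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))   # learned policy: action 1 ("right") preferred in non-terminal states
```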

Hassabis argues that resolving all of the issues of goal-oriented agents in a reinforcement learning framework would be adequate to fulfill artificial general intelligence's promise.

He claims that biological systems work in a similar manner.

The dopamine system in human brains is responsible for implementing a reinforcement learning framework.

To master the game of Go, it usually takes a lifetime of study and practice.

Go includes a significantly broader search area than chess.

There are more potential Go board positions than there are atoms in the universe.

It is also thought to be nearly impossible to develop an evaluation function that covers a significant portion of those positions in order to determine where the next stone should be placed on the board.

Each game is essentially unique, and exceptional players describe their decisions as being guided by intuition rather than logic.

AlphaGo addressed these obstacles by leveraging data gathered from thousands of strong amateur games played by human Go players to train a neural network.

After that, AlphaGo played millions of games against itself, predicting how probable each side was to win based on the present board positions.

In this way, no hand-crafted evaluation criteria were required.

In Seoul, South Korea, in 2016, AlphaGo beat Go champion Lee Sedol four games to one.

The way AlphaGo plays is considered cautious.

It favors diagonal stone placements known as "shoulder hits" to improve its chances of winning while limiting risk and point spread, placing less apparent emphasis on territorial gains on the board.

AlphaGo has subsequently been generalized into AlphaZero, which can learn to play any two-person game.

Without any human training data or sample games, AlphaZero learns from scratch.

It only learns from random play.

After just four hours of training, AlphaZero defeated Stockfish, one of the strongest free and open-source chess engines, winning 28 games and drawing 72 without a loss.

While playing chess, AlphaZero prefers piece mobility over material, which results in a creative style of play (as in Go).

Another task the business took on was to develop a versatile, adaptable, and durable AI that could teach itself how to play more than 50 Atari video games just by looking at the pixels and scores on a video screen.

Hassabis introduced deep reinforcement learning, which combines reinforcement learning and deep learning, for this difficulty.

Creating a neural network capable of reliable perceptual identification requires an input layer of observations, weighting mechanisms, and backpropagation.

In the case of the Atari challenge, the network was trained using the roughly 20,000 pixel values displayed on the videogame screen at any given moment.

Under deep learning, reinforcement learning takes the machine from the point where it perceives and recognizes a given input to the point where it can take meaningful action toward a goal.

In the Atari challenge, the computer learned how to win over hundreds of hours of playtime by selecting among eighteen distinct joystick actions at each time-step.

To put it another way, a deep reinforcement learning machine is an end-to-end learning system capable of analyzing perceptual inputs, devising a strategy, and executing the strategy from start to finish.
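As a rough sketch of the "pixels in, action values out" idea, a small convolutional network can map a stack of screen frames to estimated values for eighteen joystick actions. The PyTorch code below uses layer sizes and an 84 x 84 input resolution that are common conventions for this kind of network, not a claim about DeepMind's exact published architecture.

```python
# A rough sketch of a pixels-to-action-values network in the spirit of deep
# reinforcement learning. Layer sizes and the 84x84, 4-frame input are common
# conventions chosen here for illustration.
import torch
import torch.nn as nn

class PixelQNetwork(nn.Module):
    def __init__(self, n_actions=18):               # eighteen joystick actions
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),   # 4 stacked frames in
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),               # one estimated value per action
        )

    def forward(self, frames):                       # frames: (batch, 4, 84, 84)
        return self.head(self.features(frames))

q_values = PixelQNetwork()(torch.zeros(1, 4, 84, 84))
print(q_values.shape)                                # torch.Size([1, 18])
```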

DeepMind was purchased by Google in 2014.

Hassabis continues to work at Google with DeepMind's deep learning technology.

Optical coherence tomography scans for eye disorders are used in one of these attempts.

DeepMind's AI system can swiftly and reliably make diagnoses from eye scans, triaging patients and proposing how they should be referred for further treatment.

AlphaFold is a system combining machine learning, physics, and structural biology that predicts three-dimensional protein structures solely from their genetic sequences.

AlphaFold took first place in the 2018 "world championship" for Critical Assessment of Techniques for Protein Structure Prediction, successfully predicting the most accurate structure for 25 of 43 proteins.

AlphaStar is currently mastering the real-time strategy game StarCraft II. 



Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 


Deep Learning.



Further Reading:


“Demis Hassabis, Ph.D.: Pioneer of Artificial Intelligence.” 2018. Biography and interview. American Academy of Achievement. https://www.achievement.org/achiever/demis-hassabis-ph-d/.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing Limited.

Gibney, Elizabeth. 2015. “DeepMind Algorithm Beats People at Classic Video Games.” Nature 518 (February 26): 465–66.

Gibney, Elizabeth. 2016. “Google AI Algorithm Masters Ancient Game of Go.” Nature 529 (January 27): 445–46.

Proudfoot, Kevin, Josh Rosen, Gary Krieg, and Greg Kohs. 2017. AlphaGo. Roco Films.


Artificial Intelligence - Who Is Tanya Berger-Wolf? What Is The AI For Wildlife Conservation Software Non-profit, 'Wild Me'?

 


Tanya Berger-Wolf (1972–) is a professor in the Department of Computer Science at the University of Illinois at Chicago (UIC).

Her contributions to computational ecology and biology, data science and network analysis, and artificial intelligence for social benefit have earned her acclaim.

She is a pioneer in the subject of computational population biology, which employs artificial intelligence algorithms, computational methodologies, social science research, and data collecting to answer questions about plants, animals, and people.

Berger-Wolf teaches multidisciplinary field courses with engineering students from UIC and biology students from Princeton University at the Mpala Research Centre in Kenya.

She works in Africa because of its vast genetic variety and endangered species, which are markers of the health of life on the planet as a whole.

Her group is interested in learning more about the effects of the environment on social animal behavior, as well as what puts a species at risk.

She is a cofounder and director of Wildbook, a charity that develops animal conservation software.

Berger-Wolf's work for Wildbook included a crowd-sourced project to photograph as many Grevy's zebras as possible in order to complete a full census of the endangered animals.

After analyzing the photographs with artificial intelligence systems, the group can identify each individual Grevy's zebra by its distinctive pattern of stripes, which acts as a natural bar code or fingerprint.

Using convolutional neural networks and matching algorithms, the Wildbook program recognizes animals from hundreds of thousands of images.

The census data is utilized to focus and invest resources in the zebras' preservation and survival.

The Wildbook deep learning program may be used to identify individual members of any striped, spotted, notched, or wrinkled species.

Giraffe Spotter is Wildbook software for giraffe populations.

Wildbook's website, which contains gallery photographs from handheld cameras and camera traps, crowdsources citizen-scientist accounts of giraffe encounters.

An intelligent agent extracts still images of tail flukes from uploaded YouTube videos for Wildbook's individual whale shark catalog.

The whale shark census revealed data that persuaded the International Union for Conservation of Nature to alter the status of the creatures from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species.

The software is also being used by Wildbook to examine videos of hawksbill and green sea turtles.

Berger-Wolf also serves as the director of technology for the conservation organization Wild Me.

Machine vision artificial intelligence systems are used by the charity to recognize individual animals in the wild.

Wild Me keeps track of animals' whereabouts, migration patterns, and social groups.

The goal is to gain a comprehensive understanding of global diversity so that conservation policy can be informed.

Microsoft's AI for Earth initiative has partnered with Wild Me.

Berger-Wolf was born in Vilnius, Lithuania, in 1972.

She went to high school in St. Petersburg, Russia, and graduated from Hebrew University in Jerusalem with a bachelor's degree.

She received her doctorate from the University of Illinois at Urbana-Champaign's Department of Computer Science and did postdoctoral work at the University of New Mexico and Rutgers University.

She has received the National Science Foundation CAREER Award, the Association for Women in Science Chicago Innovator Award, and the University of Illinois at Chicago Mentor of the Year Award.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.


See also: 

Deep Learning.


Further Reading


Berger-Wolf, Tanya Y., Daniel I. Rubenstein, Charles V. Stewart, Jason A. Holmberg, Jason Parham, and Sreejith Menon. 2017. “Wildbook: Crowdsourcing, Computer Vision, and Data Science for Conservation.” Chicago, IL: Bloomberg Data for Good Exchange Conference. https://arxiv.org/pdf/1710.08880.pdf.

Casselman, Anne. 2018. “How Artificial Intelligence Is Changing Wildlife Research.” National Geographic, November. https://www.nationalgeographic.com/animals/2018/11/artificial-intelligence-counts-wild-animals/.

Snow, Jackie. 2018. “The World’s Animals Are Getting Their Very Own Facebook.” Fast Company, June 22, 2018. https://www.fastcompany.com/40585495/the-worlds-animals-are-getting-their-very-own-facebook.



Artificial Intelligence - What Is Automated Machine Learning?

 


 

Machine learning algorithms are created with the goal of detecting and describing complex patterns in massive datasets.

By taking the uncertainty out of constructing these analytical tools, automated machine learning (AutoML) aims to deliver them to everyone interested in big data research.

"Computational analysis pipelines" is the name given to these instruments.

While there is still a lot of work to be done in automated machine learning, early achievements show that it will be an important tool in the arsenal of computer and data scientists.

It will be critical to customize these software packages to beginner users, enabling them to undertake difficult machine learning activities in a user-friendly way while still allowing for the integration of domain-specific knowledge and model interpretation and action.

These latter objectives have received less attention, but they will need to be addressed in future study before AutoML is able to tackle complicated real-world situations.

Automated machine learning is a relatively young field of research that has risen in popularity in the past ten years as a consequence of the widespread availability of strong open-source machine learning frameworks and high-performance computers.

AutoML software packages are currently available in both open-source and commercial versions.

Many of these packages allow for the exploration of machine learning pipelines, which can include feature transformation algorithms such as discretization (converting continuous equations, functions, models, and variables into discrete ones for digital computers), feature engineering algorithms such as principal components analysis (removing "less important" dimensions of the data while keeping a subset of "more important" variables), and more.

Bayesian optimization, ensemble techniques, and genetic programming are examples of stochastic search strategies utilized in AutoML.

Stochastic search techniques may be used to solve deterministic issues that have random noise or deterministic problems that have randomness injected into them.

New methods for extracting "signal from noise" in datasets, as well as finding insights and making predictions, are currently being developed and tested.

One of the difficulties with machine learning is that each algorithm examines data in a unique manner.

That is, each algorithm recognizes and classifies various patterns.

Linear support vector machines, for example, are excellent at detecting linear patterns, whereas k-nearest neighbor methods are effective at detecting nonlinear patterns.
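The difference is easy to see in a small experiment: on a synthetic, visibly nonlinear dataset, a linear model and a k-nearest neighbor model score quite differently. The sketch below uses scikit-learn's make_moons data purely for illustration.

```python
# A small sketch showing that different algorithms detect different patterns.
# The make_moons dataset (synthetic, nonlinear) is used only for illustration.
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

linear_svm = LinearSVC(max_iter=10000)        # looks for a straight-line boundary
knn = KNeighborsClassifier(n_neighbors=5)     # can follow a curved boundary

print("linear SVM accuracy:", cross_val_score(linear_svm, X, y, cv=5).mean())
print("k-NN accuracy:      ", cross_val_score(knn, X, y, cv=5).mean())
```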

The problem is that scientists don't know which algorithm(s) to employ when they start their job since they don't know what patterns they're looking for in the data.

The majority of users select an algorithm that they are acquainted with or that seems to operate well across a variety of datasets.

Some people may choose an algorithm because the models it generates are simple to compare.

There are a variety of reasons why various algorithms are used for data analysis.

Nonetheless, the approach selected may not be optimal for a particular data set.

This task is especially tough for a new user who may not be aware of the strengths and disadvantages of each algorithm.

A grid search is one way to address this issue.

Multiple machine learning algorithms and parameter settings are applied to a dataset in a systematic manner, with the results compared to determine which approach is the best.

This is a frequent strategy that may provide positive outcomes.

The grid search's drawback is that it may be computationally demanding when a large number of methods, each with several parameter values, need to be examined.
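In practice, a grid search takes only a few lines of code, which makes the computational cost the main consideration. The sketch below uses scikit-learn's GridSearchCV with an arbitrary toy parameter grid on a bundled dataset; the specific parameter values are illustrative choices, not recommendations.

```python
# A minimal sketch of an exhaustive grid search with scikit-learn's GridSearchCV.
# The dataset and parameter grid are arbitrary choices for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": [1, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5)      # 3 x 3 x 3 = 27 settings, each cross-validated
search.fit(X, y)
print(search.best_params_, search.best_score_)
```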

Random forests, for example, are classification algorithms composed of numerous decision trees, with a number of commonly used parameters that must be fine-tuned for best results on a specific dataset.

The standard machine learning approach adjusts the analysis using parameters, which are configuration variables.

The maximum number of characteristics that may be used in the decision trees that are constructed and assessed is a typical parameter.

Automated machine learning may aid in the management of the complicated, computationally costly combinatorial explosion that occurs during the execution of specialized investigations.

A single parameter might have 10 distinct configurations, for example.

Another parameter might be the number of decision trees to be included in the forest, which could be 10 in total.

A third parameter, with another ten possible settings, might be the minimum number of samples permitted in the "leaves" of the decision trees.

Based on the examination of just three parameters, this example gives 1000 distinct alternative parameter configurations.

A data scientist looking at ten different machine learning methods, each with 1000 different parameter values, would have to undertake 10,000 different studies.

Hyperparameters, which are characteristics of the analyses that are established ahead of time and hence not learnt from the data, are added on top of these studies.

They are often established by the data scientist using a variety of rules of thumb or values derived from previous challenges.

Comparisons of numerous alternative cross-validation procedures or the influence of sample size on findings are examples of hyperparameter setups.

Hundreds of hyperparameter combinations may need to be assessed in a typical case.

The data scientist would have to execute a total of one million analyses using a mix of machine learning algorithms, parameter settings, and hyperparameter settings.
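The arithmetic behind that total is simple to reproduce; the snippet below just multiplies out the hypothetical counts used in this example.

```python
# Reproducing the combinatorial arithmetic above using this example's hypothetical counts.
parameter_settings = 10 * 10 * 10           # three parameters, ten settings each = 1,000
algorithms = 10                             # candidate machine learning algorithms
hyperparameter_settings = 100               # e.g., cross-validation schemes x sample sizes

analyses_per_algorithm = parameter_settings                          # 1,000
analyses_all_algorithms = algorithms * analyses_per_algorithm        # 10,000
total_analyses = analyses_all_algorithms * hyperparameter_settings   # 1,000,000
print(total_analyses)
```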

Given the computer resources available to the user, so many distinct studies might be prohibitive depending on the sample size of the data to be examined, the number of features, and the kinds of machine learning algorithms used.

Using a stochastic search to approximate the optimum mix of machine learning algorithms, parameter settings, and hyperparameter settings is an alternate technique.

Until a computational limit is reached, a random number generator is employed to sample from all possible combinations.

Before making a final decision, the user manually explores various parameter and hyperparameter settings around the optimal technique.

This has the virtue of being computationally controllable, but it has the disadvantage of being stochastic, since chance may not explore the best combinations.

To address this, a stochastic search algorithm with a heuristic element—a practical technique, guide, or rule—may be created that can adaptively explore algorithms and settings while improving over time.

Because they automate the search for optimum machine learning algorithms and parameters, approaches that combine stochastic searches with heuristics are referred to as automated machine learning.

A stochastic search could begin by creating a variety of machine learning algorithm, parameter setting, and hyperparameter setting combinations at random and then evaluate each one using cross-validation, a method for evaluating the effectiveness of a machine learning model.

The best of these is chosen, modified at random, and assessed once again.

This procedure is continued until a computational limit or a performance goal has been met.

This stochastic search is guided by the heuristic algorithm.
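A bare-bones version of that loop, sample a random configuration, score it by cross-validation, keep the best, mutate it, and repeat within a budget, might look like the sketch below. The configuration space and the one-setting-at-a-time mutation rule are invented for illustration; real AutoML systems built on Bayesian optimization or genetic programming are far more sophisticated.

```python
# A bare-bones sketch of a heuristic stochastic search over model configurations:
# sample random settings, keep the best (by cross-validation), mutate, and repeat.
# The search space and mutation rule are invented for illustration only.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
space = {"n_estimators": [50, 100, 200],
         "max_features": ["sqrt", "log2", None],
         "min_samples_leaf": [1, 2, 5, 10, 20]}

def score(config):
    model = RandomForestClassifier(random_state=0, **config)
    return cross_val_score(model, X, y, cv=3).mean()

random.seed(0)
best = {k: random.choice(v) for k, v in space.items()}    # random starting point
best_score = score(best)

for _ in range(20):                                        # computational budget
    candidate = dict(best)
    key = random.choice(list(space))                       # mutate one setting at random
    candidate[key] = random.choice(space[key])
    candidate_score = score(candidate)
    if candidate_score > best_score:                       # keep improvements only
        best, best_score = candidate, candidate_score

print(best, best_score)
```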

Optimal search strategy development is a hot topic in academia right now.

There are various benefits to using AutoML.

To begin with, it has the potential to be more computationally efficient than the exhaustive grid search method.

Second, it makes machine learning more accessible by removing some of the guesswork involved in choosing the best machine learning algorithm and its many parameters for a particular dataset.

This allows even the most inexperienced user to benefit from machine learning.

Third, if generalizability measurements are included into the heuristic being utilized, it may provide more repeatable outcomes.

Fourth, including complexity metrics into the heuristic might result in more understandable outcomes.

Fifth, if expert knowledge is included into the heuristic, it may produce more actionable findings.

AutoML techniques do, however, present certain difficulties.

The first is the risk of overfitting, which occurs when numerous distinct methods are evaluated, resulting in an analysis that matches existing data too closely but does not fit or forecast unknown or fresh data.

The more analytical techniques used on a dataset, the more likely it is to learn the data's noise, resulting in a model that is hard to generalize to new data.

With any AutoML technique, this must be thoroughly handled.

Second, AutoML is computationally demanding in and of itself.

Third, AutoML approaches may create very complicated pipelines including several machine learning algorithms.

This may make interpretation considerably more challenging than just selecting a single analytic method.

Fourth, this is a very new field.

Despite some promising early instances, ideal AutoML solutions may not have yet been devised.



~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Deep Learning.

Further Reading

Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. 2015. “Efficient and Robust Automated Machine Learning.” In Advances in Neural Information Processing Systems, 28. Montreal, Canada: Neural Information Processing Systems. http://papers.nips.cc/paper/5872-efficient-and-robust-automated-machine-learning.

Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren, eds. 2019. Automated Machine Learning: Methods, Systems, Challenges. New York: Springer.


