Artificial Intelligence - Who Is Demis Hassabis (1976–)?




Demis Hassabis lives in the United Kingdom and works as a computer game programmer, cognitive scientist, and artificial intelligence specialist.

He is a cofounder of DeepMind, the company that created the AlphaGo deep learning engine.

Hassabis is well-known for being a skilled game player.

His passion for video games paved the way for his career as an artificial intelligence researcher and computer game entrepreneur.

Hassabis' parents noticed his chess prowess at a young age.

At the age of thirteen, he had achieved the status of chess master.

He is also a World Team Champion in the strategic board game Diplomacy, a World Series of Poker Main Event participant, and a multiple-time World Pentamind and World Decamentathlon Champion at the London Mind Sports Olympiad.

Hassabis began working at Bullfrog Games in Guildford, England, with renowned game designer Peter Molyneux when he was seventeen years old.

Bullfrog was notable for creating a variety of popular computer "god games." A god game is a computer-generated life simulation in which the user has power and influence over semiautonomous people in a diverse world.

Molyneux's Populous, published in 1989, is generally regarded as the first god game.

Hassabis co-designed and coded Theme Park, a simulation management game published by Bullfrog in 1994.

Hassabis dropped out of Bullfrog Games to pursue a degree at Queens' College, Cambridge.

In 1997, he earned a bachelor's degree in computer science.

Following graduation, Hassabis rejoined Molyneux at Lionhead Studios, a new gaming studio.

Hassabis worked on the artificial intelligence for the game Black & White, another god game in which the user reigned over a virtual island inhabited by different tribes, for a short time.

Hassabis departed Lionhead after a year to launch his own video game studio, Elixir Studios.

Hassabis signed publishing deals with major publishers such as Microsoft and Vivendi Universal.

Before closing in 2005, Elixir created a variety of games, including the diplomatic strategy simulation game Republic: The Revolution and the real-time strategy game Evil Genius.

Republic's artificial intelligence is modeled after Elias Canetti's 1960 book Crowds and Power, which explores how and why crowds follow rulers' power (which Hassabis boiled down to force, money, and influence).

Republic required the daily programming efforts of twenty-five programmers over the course of four years.

Hassabis thought that the AI in the game would be valuable to academics.

Hassabis took a break from game creation to pursue additional studies at University College London (UCL).

In 2009, he received his PhD in Cognitive Neuroscience.

In his research of individuals with hippocampal injury, Hassabis revealed links between memory loss and poor imagination.

These findings revealed that the brain's memory systems may splice together recalled fragments of previous experiences to imagine hypothetical futures.

Hassabis continued his academic studies at the Gatsby Computational Neuroscience Unit at UCL and as a Wellcome Trust fellow for another two years.

He was also a visiting researcher at MIT and Harvard University.

Hassabis' cognitive science research influenced subsequent work in artificial intelligence on unsupervised learning, memory and one-shot learning, and imagination-based planning using generative models.

With Shane Legg and Mustafa Suleyman, Hassabis cofounded the London-based AI start-up DeepMind Technologies in 2011.

The organization was focused on interdisciplinary science, bringing together premier academics and concepts from machine learning, neurology, engineering, and mathematics.

The mission of DeepMind was to create scientific breakthroughs in artificial intelligence and develop new artificial general-purpose learning capabilities.

Hassabis has compared the project to the Apollo Program for AI.

DeepMind was tasked with developing a computer capable of defeating human opponents in the abstract strategic board game Go.

Hassabis didn't want to build an expert system, a brute-force computer preprogrammed with Go-specific algorithms and heuristics.

Rather than a single-purpose system like the chess-playing Deep Blue, he intended to construct a computer that adapted to playing games in ways comparable to human chess champion Garry Kasparov.

He sought to build a machine that could learn to deal with new issues and have universality, which he defined as the ability to do a variety of jobs.

The company's AlphaGo artificial intelligence agent, built to compete against Lee Sedol, an eighteen-time world champion Go player, used a reinforcement learning architecture.

In reinforcement learning, agents in an environment (in this case, the Go board) aim to attain a certain objective (winning the game).

The agents have perceptual inputs (such as visual input) as well as a statistical model based on environmental data.

The agent creates plans and goes through simulations of actions that will modify the model in order to accomplish the objective while collecting perceptual input and developing a representation of its surroundings.

The agent is always attempting to choose behaviors that will get it closer to its goal.
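
The loop described above can be summarized in a short sketch. The tabular Q-learning agent and the hypothetical `env` object (with `reset`, `step`, and `actions` methods) below are illustrative assumptions, not DeepMind's implementation, but they capture the cycle of perceiving, acting, and updating value estimates toward a goal.

```python
import random
from collections import defaultdict

# A minimal sketch of the reinforcement learning loop described above, using
# tabular Q-learning on a hypothetical `env` object that follows the common
# reset()/step() convention.

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # value estimates for (state, action) pairs

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Choose an action: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Move the value estimate toward reward plus discounted future value.
            best_next = max(q[(next_state, a)] for a in env.actions(next_state)) if not done else 0.0
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```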

Hassabis argues that resolving all of the issues of goal-oriented agents in a reinforcement learning framework would be adequate to fulfill artificial general intelligence's promise.

He claims that biological systems work in a similar manner.

The dopamine system in human brains is responsible for implementing a reinforcement learning framework.

To master the game of Go, it usually takes a lifetime of study and practice.

Go includes a significantly broader search area than chess.

On the board, there are more potential Go locations than there are atoms in the cosmos.

It is also thought to be nearly impossible to develop an evaluation function that covers a significant portion of those positions in order to determine where the next stone should be placed on the board.

Each game is essentially unique, and exceptional players describe their decisions as being guided by intuition rather than logic.

AlphaGo addressed these obstacles by leveraging data gathered from thousands of strong amateur games played by human Go players to train a neural network.

After that, AlphaGo played millions of games against itself, predicting how probable each side was to win based on the present board positions.

No specific assessment standards were required in this manner.
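
A rough sketch of this self-play idea follows. The helper functions (`initial_board`, `legal_moves`, `apply_move`, `winner_or_none`) and the `policy` argument are hypothetical placeholders; the point is only how positions get labeled with eventual outcomes so a value model can learn win probabilities.

```python
# A schematic sketch (not DeepMind's code) of how self-play turns game outcomes
# into training targets for a value model: every position seen during a game is
# labeled with the eventual winner, and a supervised learner is then fit to
# predict that outcome from the position.

def self_play_value_data(policy, initial_board, legal_moves, apply_move,
                         winner_or_none, games=1000):
    examples = []
    for _ in range(games):
        board, player, history = initial_board(), 1, []
        while winner_or_none(board) is None:
            history.append((board, player))
            move = policy(board, legal_moves(board, player))  # e.g., network-guided
            board = apply_move(board, move, player)
            player = -player                                  # switch sides
        result = winner_or_none(board)                        # assumed to be +1 or -1
        examples.extend(((pos, side), result) for pos, side in history)
    return examples  # training pairs for a model estimating P(win | position)
```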

In Seoul, South Korea, in 2016, AlphaGo beat Go champion Lee Sedol four games to one.

The way AlphaGo plays is considered cautious.

It favors moves such as the diagonal stone placements known as "shoulder hits" that maximize its probability of winning rather than the margin of victory, putting less apparent focus on achieving territorial gains on the board.

AlphaGo has since been generalized into AlphaZero, a system designed to learn any two-player game of this kind.

AlphaZero learns from scratch, without any human training data or sample games.

It learns only from random self-play.

After just four hours of training, AlphaZero defeated Stockfish, one of the strongest free and open-source chess engines (28 games to 0, with 72 draws).

In chess, AlphaZero prefers piece mobility over material, which results in a creative style of play (as it does in Go).

Another challenge the company took on was developing a versatile, adaptable, and robust AI that could teach itself to play more than fifty Atari video games just by looking at the pixels and scores on a video screen.

Hassabis introduced deep reinforcement learning, which combines reinforcement learning and deep learning, to meet this challenge.

Deep neural networks combine an input layer of observations, weighted connections, and backpropagation to produce reliable perceptual recognition.

In the case of the Atari challenge, the network was trained on the 20,000 pixel values that appeared on the videogame screen at any given time.

Deep learning handles perceiving and recognizing a given input; reinforcement learning then takes the machine from recognition to meaningful action toward a goal.

In the Atari challenge, the computer learned how to win over hundreds of hours of playtime by selecting among eighteen discrete joystick actions at each time-step.

To put it another way, a deep reinforcement learning machine is an end-to-end learning system capable of analyzing perceptual inputs, devising a strategy, and executing that strategy from start to finish.
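
The sketch below, written in the spirit of the deep Q-network approach, shows how pixel frames can map to Q-values for eighteen joystick actions and how an epsilon-greedy rule picks among them. The architecture and parameters are illustrative assumptions, not DeepMind's published code.

```python
import random
import torch
import torch.nn as nn

# A minimal sketch assuming a stack of four preprocessed 84x84 grayscale game
# frames as input and the eighteen discrete joystick actions mentioned above.

class QNetwork(nn.Module):
    def __init__(self, n_actions=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, n_actions)
        )

    def forward(self, frames):                    # frames: (batch, 4, 84, 84)
        return self.head(self.features(frames))   # one Q-value per joystick action

def select_action(q_net, frames, epsilon=0.05):
    # Epsilon-greedy: usually take the highest-valued action, occasionally explore.
    if random.random() < epsilon:
        return random.randrange(18)
    with torch.no_grad():
        return int(q_net(frames.unsqueeze(0)).argmax(dim=1))
```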

DeepMind was purchased by Google in 2014.

Hassabis continues to work at Google with DeepMind's deep learning technology.

One of these efforts uses optical coherence tomography (OCT) scans of the eye to detect disease.

DeepMind's AI system can swiftly and reliably make diagnoses from eye scans, triaging patients and proposing how they should be referred for further treatment.

AlphaFold is a system combining machine learning, physics, and structural biology that predicts a protein's three-dimensional structure from its genetic sequence alone.

AlphaFold took first place in the 2018 "world championship" for Critical Assessment of Techniques for Protein Structure Prediction, successfully predicting the most accurate structure for 25 of 43 proteins.

AlphaStar is currently mastering the real-time strategy game StarCraft II. 



Jai Krishna Ponnappan





See also: 


Deep Learning.



Further Reading:


“Demis Hassabis, Ph.D.: Pioneer of Artificial Intelligence.” 2018. Biography and interview. American Academy of Achievement. https://www.achievement.org/achiever/demis-hassabis-ph-d/.

Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building It. Birmingham, UK: Packt Publishing Limited.

Gibney, Elizabeth. 2015. “DeepMind Algorithm Beats People at Classic Video Games.” Nature 518 (February 26): 465–66.

Gibney, Elizabeth. 2016. “Google AI Algorithm Masters Ancient Game of Go.” Nature 529 (January 27): 445–46.

Proudfoot, Kevin, Josh Rosen, Gary Krieg, and Greg Kohs. 2017. AlphaGo. Roco Films.


Artificial Intelligence - Intelligent Tutoring Systems.

  



Intelligent tutoring systems are artificial intelligence-based instructional systems that adapt instruction based on a variety of learner variables, such as dynamic measures of students' ongoing knowledge growth, personal interest, motivation to learn, affective states, and aspects of how they self-regulate their learning.

For a variety of problem areas, such as STEM, computer programming, language, and culture, intelligent tutoring systems have been created.

Complex problem-solving activities, collaborative learning activities, inquiry learning or other open-ended learning activities, learning through conversations, game-based learning, and working with simulations or virtual reality environments are among the many types of instructional activities they support.

Intelligent tutoring systems arose from a field of study known as AI in Education (AIED).

MATHia® (previously Cognitive Tutor), SQL-Tutor, ALEKS, and Reasoning Mind's Genie system are among the commercially successful and widely used intelligent tutoring systems.

According to six comprehensive meta-analyses, intelligent tutoring systems are frequently more effective than conventional forms of instruction.

This effectiveness may be due to a number of factors.

First, intelligent tutoring systems give adaptive support within problems, allowing one-on-one tutoring to scale beyond what classroom instructors could provide on their own.

Second, they allow adaptive problem selection based on the understanding of particular pupils.

Third, cognitive task analysis, cognitive theory, and learning sciences ideas are often used in intelligent tutoring systems.

Fourth, the employment of intelligent tutoring tools in so-called blended classrooms may result in favorable cultural adjustments by allowing teachers to spend more time working one-on-one with pupils.

Fifth, tutoring systems are iteratively refined, based on data, using new approaches from the field of educational data mining.

Finally, Open Learner Models (OLMs), which are visual representations of the system's internal student model, are often used in intelligent tutoring systems.

OLMs have the potential to assist learners in productively reflecting on their current level of learning.

Model-tracing tutors, constraint-based tutors, example-tracing tutors, and ASSISTments are some of the most common intelligent tutoring system paradigms.

These paradigms vary in how they are created, as well as in tutoring behaviors and underlying representations of domain knowledge, student knowledge, and pedagogical knowledge.

Intelligent tutoring systems use a number of AI techniques for domain reasoning (e.g., producing future steps in a problem given a student's partial answer), for assessing student solutions and partial solutions, and for student modeling (i.e., dynamically estimating and maintaining a range of learner variables).

To increase systems' student modeling skills, a range of data mining approaches (including Bayesian models, hidden Markov models, and logistic regression models) are increasingly being applied.

Machine learning approaches, such as reinforcement learning, are utilized to build instructional policies to a lesser extent.
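
As a concrete illustration of the Bayesian student-modeling techniques mentioned above, the sketch below implements Bayesian Knowledge Tracing with illustrative parameter values; it is not the model of any particular commercial tutor.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT). The parameter values
# are assumptions chosen for illustration only.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that a student knows a skill."""
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Allow for learning between opportunities to practice the skill.
    return posterior + (1 - posterior) * p_learn

# Example: tracing one student's skill estimate over a sequence of answers.
p = 0.3  # prior probability that the skill is already known
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(round(p, 3))
```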

Researchers are looking at concepts for the smart classroom of the future that go beyond the capabilities of present intelligent tutoring technologies.

In these visions, AI systems typically collaborate with teachers and students to provide excellent learning experiences for all pupils.

Recent research suggests that, rather than designing intelligent tutoring systems to handle all aspects of adaptation, promising approaches adaptively share the regulation of learning processes across students, teachers, and AI systems, for example, by providing teachers with real-time analytics from an intelligent tutoring system that draw their attention to learners who may need additional support.



Jai Krishna Ponnappan





See also: 


Natural Language Processing and Speech Understanding; Workplace Automation.



Further Reading:




Aleven, Vincent, Bruce M. McLaren, Jonathan Sewall, Martin van Velsen, Octav Popescu, Sandra Demi, Michael Ringenberg, and Kenneth R. Koedinger. 2016. “Example-Tracing Tutors: Intelligent Tutor Development for Non-Programmers.” International Journal of Artificial Intelligence in Education 26, no. 1 (March): 224–69.

Aleven, Vincent, Elizabeth A. McLaughlin, R. Amos Glenn, and Kenneth R. Koedinger. 2017. “Instruction Based on Adaptive Learning Technologies.” In Handbook of Research on Learning and Instruction, Second edition, edited by Richard E. Mayer and Patricia Alexander, 522–60. New York: Routledge.

du Boulay, Benedict. 2016. “Recent Meta-Reviews and Meta-Analyses of AIED Systems.” International Journal of Artificial Intelligence in Education 26, no. 1: 536–37.

du Boulay, Benedict. 2019. “Escape from the Skinner Box: The Case for Contemporary Intelligent Learning Environments.” British Journal of Educational Technology, 50, no. 6: 2902–19.

Heffernan, Neil T., and Cristina Lindquist Heffernan. 2014. “The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.” International Journal of Artificial Intelligence in Education 24, no. 4: 470–97.

Koedinger, Kenneth R., and Albert T. Corbett. 2006. “Cognitive Tutors: Technology Bringing Learning Sciences to the Classroom.” In The Cambridge Handbook of the Learning Sciences, edited by Robert K. Sawyer, 61–78. New York: Cambridge University Press.

Mitrovic, Antonija. 2012. “Fifteen Years of Constraint-Based Tutors: What We Have Achieved and Where We Are Going.” User Modeling and User-Adapted Interaction 22, no. 1–2: 39–72.

Nye, Benjamin D., Arthur C. Graesser, and Xiangen Hu. 2014. “AutoTutor and Family: A Review of 17 Years of Natural Language Tutoring.” International Journal of Artificial Intelligence in Education 24, no. 4: 427–69.

Pane, John F., Beth Ann Griffin, Daniel F. McCaffrey, and Rita Karam. 2014. “Effectiveness of Cognitive Tutor Algebra I at Scale.” Educational Evaluation and Policy Analysis 36, no. 2: 127–44.

Schofield, Janet W., Rebecca Eurich-Fulcer, and Chen L. Britt. 1994. “Teachers, Computer Tutors, and Teaching: The Artificially Intelligent Tutor as an Agent for Classroom Change.” American Educational Research Journal 31, no. 3: 579–607.

VanLehn, Kurt. 2016. “Regulative Loops, Step Loops, and Task Loops.” International Journal of Artificial Intelligence in Education 26, no. 1: 107–12.


Artificial Intelligence - Intelligent Transportation.

  



The use of advanced technology, artificial intelligence, and control systems to manage highways, cars, and traffic is known as intelligent transportation.

Traditional American highway engineering disciplines such as driver routing, junction management, traffic distribution, and system-wide command and control inspired the notion.

Because it attempts to embed monitoring equipment in pavements, signaling systems, and individual automobiles in order to decrease congestion and increase safety, intelligent transportation has significant privacy and security considerations.

Highway engineers of the 1950s and 1960s were commonly referred to as "communications engineers," since they used information in the form of signs, signals, and statistics to govern vehicle and highway interactions and traffic flow.

During these decades, computers were primarily employed to simulate crossings and calculate highway capacity.

S. Y. Wong's Traffic Simulator, which used the facilities of the Institute for Advanced Study (IAS) computer in Princeton, New Jersey, to study traffic engineering, is one of the early applications of computing technology in this respect.

To represent road systems, traffic regulations, driver behavior, and weather conditions, Wong's mid-1950s simulator used computational tools originally developed to study electrical networks.

Dijkstra's Algorithm, named for computer scientist Edsger Dijkstra, was a pioneering use of information technology to automatically construct and map least distance routes.

In 1959, Dijkstra created an algorithm that finds the shortest path between a starting point and a destination point on a map.

Online mapping systems still use Dijkstra's routing method, and it has significant economic utility in traffic management planning.
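
For reference, a compact version of Dijkstra's algorithm looks like the sketch below; the `graph` structure (each node mapped to a list of neighbor and distance pairs) is an assumed representation.

```python
import heapq

# A compact sketch of Dijkstra's shortest-path algorithm as described above.

def dijkstra(graph, start):
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return dist  # shortest known distance from `start` to every reachable node

example = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
print(dijkstra(example, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0}
```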

Other algorithms and automated devices for guidance control, traffic signaling, and ramp metering were developed by the automotive industry during the 1960s.

Many of these devices, such as traffic right-of-way signals coupled to transistorized fixed-time control boxes, synchronized signals, and traffic-actuated vehicle pressure detectors, became commonplace among the public.

Despite this, traffic control system simulation in experimental labs has remained an essential use of information technology in transportation.

Despite the engineers' best efforts, the rising popularity of vehicles and long-distance travel stressed the national highway system in the 1960s, resulting in a "crisis" in surface transportation network operations.

By the mid-1970s, engineers were considering information technology as a viable alternative to traditional signaling, road expansion, and grade separation approaches for decreasing traffic congestion and increasing safety.

Much of this work was focused on the individual driver, who was supposed to be able to utilize data to make real-time choices that would make driving more enjoyable.

With onboard instrument panels and diagnostics, computing technology promised to make navigation simpler and maximize safety while lowering journey times, particularly when combined with other technologies such as radar, the telephone, and television cameras.

Driving became more information-rich as computer chip prices dropped in the 1980s.

Electronic fuel gauges and oil level indicators, as well as digital speedometer readouts and other warnings, were added to high-end automobile models.

Most states' television broadcasts started delivering pre-trip travel information and weather briefings in the 1990s, based on data and video collected automatically by roadside sensing stations and video cameras.

These summaries were made accessible at roadside way stations, where passengers could get live text-based weather forecasts and radar pictures on public computer displays, as well as text on pagers and mobile phones.

Few of these innovations had a significant influence on personal privacy or liberty.

However, in 1991, Congress approved the Intermodal Surface Transportation Efficiency Act (ISTEA or "Ice Tea"), coauthored by Norman Mineta, who later served as Secretary of Transportation; the act allocated $660 million for the creation of the country's Intelligent Vehicle Highway System (IVHS).

Improved safety, decreased congestion, greater mobility, energy efficiency, economic productivity, increased usage of public transit, and environmental cleanup are among the aims set out in the act.

All of these objectives would be achieved via the effective use of information technology to facilitate transportation in the aggregate as well as on a vehicle-by-vehicle basis.

Hundreds of projects were funded to provide new infrastructure and possibilities for travel and traffic management, public transit management, electronic toll payment, commercial fleet management, emergency management, and vehicle safety, among other things.

While some applications of intelligent transportation technology remained underutilized in the 1990s—for example, carpool matching—other applications became virtually standard on American highways: for example, onboard safety monitoring and precrash deployment of airbags in cars, or automated weigh stations, roadside safety inspections, and satellite Global Positioning System (GPS) tracking for tractor-trailers.

Private enterprise had joined the government in augmenting several of these services by the mid-1990s.

OnStar, a factory-installed telematics system that utilizes GPS and cell phone communications to give route guidance, summon emergency and roadside assistance services, track stolen cars, remotely diagnose mechanical faults, and access locked doors, is included in all General Motors vehicles.

Automobile manufacturers also started experimenting with infrared sensors coupled to expert systems for autonomous collision avoidance, as well as developing technology that allows automobiles to be "platooned" into huge groups of closely spaced vehicles to optimize highway lane capacity.

Perhaps the most widely used implementation of intelligent transportation technology was electronic toll and traffic management (ETTM), launched in the 1990s.

ETTM enabled drivers who placed a radio transponder in their cars to pay highway tolls without having to slow down.

Florida, New York State, New Jersey, Michigan, Illinois, and California were all using ETTM systems by 1995.

Since then, ETTM has expanded to a number of additional states as well as internationally.

Because of its potential for government intrusion, intelligent transportation initiatives have sparked debate.

Hong Kong's government deployed electronic road pricing (ERP) in the mid-1980s, with radar transmitter-receivers triggered when cars went through tolled tunnels or highway checkpoints.

The system's bills supplied drivers with a full record of where they had gone and when they had been there.

The system was put on hold before the British turned Hong Kong over to the Chinese in 1997, due to concerns about potential human rights violations.

On the other hand, the basic purpose of political surveillance is sometimes broadened to include transportation goals.

The UK government, for example, placed street-based closed-circuit television cameras (CCTV) in a "Ring of Steel" around London's financial sector in 1993 to defend against Irish Republican Army terror bombs.

Ten years later, in 2003, monitoring of downtown London was enhanced with infrared illuminators from Extreme CCTV, enabling the "capture" of license plate numbers on automobiles.

A daily usage tax was imposed on drivers entering the crowded downtown area.

Vehicles with a unique identification like the Vehicle Identification Number (VIN) or an electronic tag may be tracked using technologies like GPS and electronic payment tollbooth software, for example.

This opens the door to continuous monitoring and tracking of driving choices, as well as the prospect of permanent movement records.

Individual toll crossing locations and timings, the car's average speed, and photos of all passengers are routinely obtained by intelligent transportation surveillance.

In the early 2000s, state transportation bureaus in Florida and California utilized comparable data to send out surveys to individual drivers who used certain roads.

Several state motor vehicle agencies have also contemplated establishing "dual-use" intelligent transportation databanks to supply or sell traffic and driver-related data to law enforcement and marketers.

Artificial intelligence methods are becoming an increasingly important part of intelligent transportation planning, especially as large amounts of data from actual driving experiences are now being collected.

They're being used more and more to control vehicles, predict traffic congestion, and meter traffic, as well as reduce accident rates and fatalities.

Artificial neural networks, genetic algorithms, fuzzy logic, and expert systems are among the AI techniques already in use in various intelligent transportation applications, both singly and in combination.

These methods are being used to develop new vehicle control systems for autonomous and semiautonomous driving, automatic braking control, and real-time energy consumption and emissions monitoring.

Surtrac, for example, is a scalable, adaptive traffic control system created by Carnegie Mellon University, which uses theoretical modeling and artificial intelligence algorithms.

The amount of traffic on particular roadways and intersections may change dramatically throughout the day.

Traditional automated traffic control technology adapts to established patterns on a set timetable or depends on traffic control observations from a central location.

Intersections can communicate with one another, and automobiles may possibly exchange their user-programmed travel paths, thanks to adaptive traffic management.
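
A heavily simplified sketch of adaptive signal control appears below: at each decision point, the green phase goes to the approach with the most waiting vehicles. Surtrac's actual schedule-driven optimization is far more sophisticated; this only conveys the idea of reacting to observed traffic rather than following a fixed timetable.

```python
# A toy illustration of adaptive (rather than fixed-schedule) signal control:
# give the green to whichever phase currently has the longest queue.

def choose_phase(queues):
    """queues: dict mapping phase name -> number of vehicles currently waiting."""
    return max(queues, key=queues.get)

current = {"north-south": 12, "east-west": 4, "left-turns": 7}
print(choose_phase(current))  # -> "north-south"
```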

Vivacity Labs in the United Kingdom employs video sensors at junctions and AI technology to monitor and anticipate traffic conditions in real time during an individual motorist's trip, as well as perform mobility assessments for enterprises and local government bodies at the city scale.

Future paths in intelligent transportation research and development may be determined by fuel costs and climate change consequences.

When oil costs are high, rules may encourage sophisticated traveler information systems that advise drivers to the best routes and departure times, as well as expected (and costly) idling and wait periods.

If traffic grows more crowded, cities may use smart city technologies like real-time traffic and parking warnings, automated incident detection and vehicle recovery, and linked surroundings to govern human-piloted vehicles, autonomous automobiles, and mass transit systems.

More cities throughout the globe are going to use dynamic cordon pricing, which entails calculating and collecting fees to enter or drive in crowded regions.

Vehicle occupancy detection monitors and vehicle categorization detectors are examples of artificial intelligence systems that enable congestion charging.

 



Jai Krishna Ponnappan





See also: 


Driverless Cars and Trucks; Trolley Problem.


Further Reading:



Alpert, Sheri. 1995. “Privacy and Intelligent Highway: Finding the Right of Way.” Santa Clara Computer and High Technology Law Journal 11: 97–118.

Blum, A. M. 1970. “A General-Purpose Digital Traffic Simulator.” Simulation 14, no. 1: 9–25.

Diebold, John. 1995. Transportation Infostructures: The Development of Intelligent Transportation Systems. Westport, CT: Greenwood Publishing Group.

Garfinkel, Simson L. 1996. “Why Driver Privacy Must Be a Part of ITS.” In Converging Infrastructures: Intelligent Transportation and the National Information Infrastructure, edited by Lewis M. Branscomb and James H. Keller, 324–40. Cambridge, MA: MIT Press.

High-Tech Highways: Intelligent Transportation Systems and Policy. 1995. Washington, DC: Congressional Budget Office.

Machin, Mirialys, Julio A. Sanguesa, Piedad Garrido, and Francisco J. Martinez. 2018. “On the Use of Artificial Intelligence Techniques in Intelligent Transportation Systems.” In IEEE Wireless Communications and Networking Conference Workshops (WCNCW), 332–37. Piscataway, NJ: IEEE.

Rodgers, Lionel M., and Leo G. Sands. 1969. Automobile Traffic Signal Control Systems. Philadelphia: Chilton Book Company.

Wong, S. Y. 1956. “Traffic Simulator with a Digital Computer.” In Proceedings of the Western Joint Computer Conference, 92–94. New York: American Institute of Electrical Engineers.




Artificial Intelligence - Agriculture Using Intelligent Sensing.

  



From Neolithic tools that helped humans transition from hunter gatherers to farmers to the British Agricultural Revolution, which harnessed the power of the Industrial Revolution to increase yields (Noll 2015), technological innovation has always driven food production.

Today, agriculture is highly technical, as scientific discoveries continue to be integrated into production systems.

Intelligent Sensing Agriculture is one of the newest additions to a long history of integrating cutting-edge technology to the production, processing, and distribution of food.

These technological gadgets are generally used to achieve the dual aim of boosting crop yields while lowering agricultural system environmental effects.

Intelligent sensors are devices that, as part of their designed function, can execute a variety of complex operations.

These sensors should not be confused with "smart" sensors or instrument packages that can collect data from the physical environment (Cleaveland 2006).

Intelligent sensors are unique in that they not only detect but also react to varied circumstances in nuanced ways depending on the information they collect.

"In general, sensors are devices that measure a physical quantity and turn the result into a signal that can be read by an observer or instrument; however, intelligent sensors may analyze measured data" (Bialas 2010, 822).

Their capacity to govern their own processes in response to environmental stimuli is what distinguishes them as "intelligent." They collect basic measurements of various factors (such as light, temperature, and humidity) and then formulate intermediate responses to these factors (Yamasaki 1996).

The capacity to do sophisticated learning, information processing, and adaptation all in one integrated package is required for this feature.

These sensor packages are employed in a broad variety of applications, from aerospace to health care, and their scope is growing.

While all of these applications are novel, the use of intelligent sensors in agriculture might have a broad variety of social advantages owing to the technology.

There is a pressing need to boost the productivity of existing productive agricultural fields.

In 2017, the world's population approached 7.6 billion people, according to the United Nations (2017).

The majority of the world's arable land, on the other hand, is already being used for food.

Currently, over half of the land in the United States is used to generate agricultural goods, whereas 40% of the land in the United Kingdom is utilized to create agricultural products (Thompson 2010).

Due to a scarcity of undeveloped land, agricultural production must skyrocket within the next 10 years, yet environmental effects must be avoided in order to boost overall sustainability and long-term productivity.

Intelligent sensors aid in maximizing the use of all available resources, lowering agricultural expenses, and limiting the use of hazardous inputs (Pajares 2011).

"When nutrients in the soil, humidity, solar radiation, weed density, and a wide range of other factors and data affecting production are known," Pajares says, "the situation improves, and the use of chemical products such as fertilizers, herbicides, and other pollutants can be significantly reduced" (Pajares 2011, 8930).

The majority of intelligent sensor applications in this context fall under "precision agriculture," described as information-intensive crop management that uses technology to observe, measure, and respond to crucial variables. When combined with computer networks, this data allows fields to be managed from afar.

Combinations of several kinds of sensors (such as temperature and image-based devices) enable for monitoring and control regardless of distance.

Intelligent sensors gather in-field data to aid agricultural production management in a variety of ways.

The following are some examples of specialized applications: Unmanned Aerial Vehicles (UAVs) with a suite of sensors detect fires (Pajares 2011); LIDAR sensors paired with GPS identify trees and estimate forest biomass; and capacitance probes measure soil moisture while reflectometers determine crop moisture content.

Other sensor types may identify weeds, evaluate soil pH, quantify carbon metabolism in peatlands, regulate irrigation systems, monitor temperatures, and even operate machinery like sprayers and tractors.
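
A toy sketch of the kind of in-field decision logic such a sensor node might run is shown below; the readings, thresholds, and output units are invented for illustration and are not agronomic recommendations.

```python
from dataclasses import dataclass

# A toy sketch of intelligent-sensor decision logic: read local conditions and
# respond by adjusting irrigation. All values here are illustrative assumptions.

@dataclass
class FieldReading:
    soil_moisture: float   # volumetric water content, 0.0-1.0
    temperature_c: float
    rain_forecast: float   # probability of rain in the next 24 hours

def irrigation_decision(reading, moisture_target=0.25):
    # Skip irrigation when the soil is wet enough or rain is likely.
    if reading.soil_moisture >= moisture_target or reading.rain_forecast > 0.6:
        return 0.0
    # Otherwise irrigate in proportion to the moisture deficit,
    # slightly increased on hot days to offset evaporation.
    deficit = moisture_target - reading.soil_moisture
    heat_factor = 1.2 if reading.temperature_c > 30 else 1.0
    return round(deficit * 100 * heat_factor, 1)  # illustrative liters per square meter

print(irrigation_decision(FieldReading(soil_moisture=0.15, temperature_c=33, rain_forecast=0.1)))
```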

When equipped with sophisticated sensors, robotic devices might be utilized to undertake many of the tasks presently performed by farmers.

Modern farming is being revolutionized by intelligent sensors, and as technology progresses, chores will become more automated.

Agricultural technology, on the other hand, have a long history of public criticism.

One criticism of the use of intelligent sensors in agriculture is that it might have negative societal consequences.

While these devices improve agricultural systems' efficiency and decrease environmental problems, they may have a detrimental influence on rural populations.

Technological advancements have revolutionized the way farmers manage their crops and livestock since the invention of the first plow.

Intelligent sensors may allow tractors, harvesters, and other equipment to operate without the need for human involvement, potentially altering the way food is produced.

This might lower the number of people required in the agricultural industry, and consequently the number of jobs available in rural regions, where agricultural production is mostly conducted.

Furthermore, this technology may be too costly for farmers, increasing the likelihood of small farms failing.

The so-called "technology treadmill" is often blamed for such failures.

This term describes a situation in which a small number of farmers adopt a new technology and profit because their production costs are lower than their competitors'.

Increased earnings are no longer possible when more producers embrace this technology and prices decline.

It becomes important to use this new technology in order to compete in a market where others are doing so.

Farmers who do not implement the technology are eventually forced out of business, while those who do thrive.

The use of clever sensors may help to keep the technological treadmill going.

Regardless, the sensors have a broad variety of social, economic, and ethical effects that will need to be examined as the technology advances.

 


Jai Krishna Ponnappan





See also: 


Workplace Automation.



Further Reading:



Bialas, Andrzej. 2010. “Intelligent Sensors Security.” Sensors 10, no. 1: 822–59.

Cleaveland, Peter. 2006. “What Is a Smart Sensor?” Control Engineering, January 1, 2006. https://www.controleng.com/articles/what-is-a-smart-sensor/.

Noll, Samantha. 2015. “Agricultural Science.” In A Companion to the History of American Science, edited by Mark Largent and Georgina Montgomery. New York: Wiley-Blackwell.

Pajares, Gonzalo. 2011. “Advances in Sensors Applied to Agriculture and Forestry.” Sensors 11, no. 9: 8930–32.

Thompson, Paul B. 2009. “Philosophy of Agricultural Technology.” In Philosophy of Technology and Engineering Sciences, edited by Anthonie Meijers, 1257–73. Handbook of the Philosophy of Science. Amsterdam: North-Holland.

Thompson, Paul B. 2010. The Agrarian Vision: Sustainability and Environmental Ethics. Lexington: University Press of Kentucky.

United Nations, Department of Economic and Social Affairs. 2017. World Population Prospects: The 2017 Revision. New York: United Nations.

Yamasaki, Hiro. 1996. “What Are the Intelligent Sensors.” In Handbook of Sensors and Actuators, vol. 3, edited by Hiro Yamasaki, 1–17. Amsterdam: Elsevier Science B.V.



Artificial Intelligence - The Human Brain Project

  



The European Union's major brain research endeavor is the Human Brain Project.

The project, which encompasses Big Science in terms of the number of participants and its lofty ambitions, is a multidisciplinary coalition of over one hundred partner institutions and includes professionals from the disciplines of computer science, neurology, and robotics.

The Human Brain Project was launched in 2013 as an EU Future and Emerging Technologies initiative with a budget of over one billion euros.

The ten-year project aims to make fundamental advancements in neuroscience, medicine, and computer technology.

Researchers working on the Human Brain Project hope to learn more about how the brain functions and how to imitate its computing skills.

Human Brain Organization, Systems and Cognitive Neuroscience, Theoretical Neuroscience, and implementations such as the Neuroinformatics Platform, Brain Simulation Platform, Medical Informatics Platform, and Neuromorphic Computing Platform are among the twelve subprojects of the Human Brain Project.

Six information and communication technology platforms were released by the Human Brain Project in 2016 as the main research infrastructure for ongoing brain research.

The project's research is focused on the creation of neuromorphic (brain-inspired) computer chips, in addition to infrastructure established for gathering and distributing data from the scientific community.

BrainScaleS is a subproject that uses analog signals to simulate the neuron and its synapses.

SpiNNaker (Spiking Neural Network Design) is a supercomputer architecture based on numerical models operating on special multicore digital devices.

The Neurorobotic Platform is another ambitious subprogram, where "virtual brain models meet actual or simulated robot bodies" (Fauteux 2019).

The project's modeling of the human brain, which includes 100 billion neurons, each with about 7,000 synaptic connections to other neurons, necessitates massive computational resources.

Computer models of the brain are created on six supercomputers at research sites around Europe.

These models are currently being used by project researchers to examine illnesses.

The project has drawn criticism.

Scientists protested in a 2014 open letter to the European Commission about the program's lack of openness and governance, as well as the program's small breadth of study in comparison to its initial goal and objectives.

The Human Brain Project has a new governance structure as a result of an examination and review of its financing procedures, needs, and stated aims.

 



Jai Krishna Ponnappan





See also: 


Blue Brain Project; Cognitive Computing; SyNAPSE.


Further Reading:


Amunts, Katrin, Christoph Ebell, Jeff Muller, Martin Telefont, Alois Knoll, and Thomas Lippert. 2016. “The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain.” Neuron 92, no. 3 (November): 574–81.

Fauteux, Christian. 2019. “The Progress and Future of the Human Brain Project.” Scitech Europa, February 15, 2019. https://www.scitecheuropa.eu/human-brain-project/92951/.

Markram, Henry. 2012. “The Human Brain Project.” Scientific American 306, no. 6 (June): 50–55.

Markram, Henry, Karlheinz Meier, Thomas Lippert, Sten Grillner, Richard Frackowiak, Stanislas Dehaene, Alois Knoll, Haim Sompolinsky, Kris Verstreken, Javier DeFelipe, Seth Grant, Jean-Pierre Changeux, and Alois Saria. 2011. “Introducing the Human Brain Project.” Procedia Computer Science 7: 39–42.



Artificial Intelligence - Algorithmic Composition And Generative Music.

 


A composer's approach for producing new musical material by following a preset limited set of rules or procedures is known as algorithmic composition.

In place of normal musical notation, the algorithm might instead be a set of instructions defined by the composer for the performer to follow throughout a performance. 

According to one school of thinking, algorithmic composition should include as little human intervention as possible.

In music, AI systems based on generative grammar, knowledge-based systems, genetic algorithms, and, more recently, deep learning-trained artificial neural networks have all been used.

The employment of algorithms to assist in the development of music is far from novel.

Several thousand-year-old music theory treatises provide early examples.

These treatises compiled lists of common-practice rules and conventions that composers followed in order to write music correctly.

Johann Joseph Fux's Gradus ad Parnassum (1725), which describes the precise rules defining species counterpoint, is an early example of algorithmic composition.

Intended as an instructional tool, species counterpoint presented five techniques for composing complementary harmony lines against a primary or fixed melody.

Fux's technique, if followed to the letter, allows little deviation from the specified rules.

Chance was often used in early instances of algorithmic composition with little human intervention.

Chance music, often known as aleatoric music, dates back to the Renaissance.

Mozart is credited with the most renowned early example of the technique.

The use of a "Musikalisches Würfelspiel" (musical dice game) appears in a published manuscript attributed to Mozart and dated 1787.

To assemble a sixteen-bar waltz, the performer rolls dice to choose, at random, one-bar segments of precomposed music (out of a possible 176).
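
The mechanics of the dice game are easy to sketch in code. The fragment table below is a stand-in, not Mozart's actual measures; only the selection procedure is illustrated.

```python
import random

# A small sketch of the "Musikalisches Würfelspiel" idea: two dice select one of
# several precomposed one-bar fragments for each of the sixteen bars of a waltz.

fragment_table = {bar: {total: f"bar{bar}-option{total}" for total in range(2, 13)}
                  for bar in range(1, 17)}

def roll_waltz(table):
    waltz = []
    for bar in range(1, 17):
        total = random.randint(1, 6) + random.randint(1, 6)  # roll two dice
        waltz.append(table[bar][total])
    return waltz

print(roll_waltz(fragment_table))
```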



John Cage, an American composer, took these early aleatoric approaches to a new level by composing a work in which the bulk of the composition was determined by chance.

In the musical dice game, chance is only allowed to affect the sequence of brief pre-composed musical snippets, but in his 1951 work Music of Changes, chance is allowed to govern almost all choices.

To make all musical judgments, Cage consulted the ancient Chinese divination text the I Ching (The Book of Changes).

For playability considerations, his friend David Tudor, the work's performer, had to convert his highly explicit and intricate score into something closer to conventional notation.

This demonstrates two types of aleatoric music: one in which the composer uses random processes to generate a fixed score, and another in which the ordering of the musical material is left to the performer or to chance.

Arnold Schoenberg created a twelve-tone algorithmic composition process that is closely related to fields of mathematics like combinatorics and group theory.

Twelve-tone composition is an early form of serialism in which each of the twelve tones of traditional western music is given equal weight.

After placing each tone in a chosen row with no repeated pitches, the row is rotated one step at a time until a 12 × 12 matrix is formed.

The matrix contains all variants of the original tone row that the composer may use for pitch material.
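
One common way to construct the 12 × 12 matrix is sketched below, with pitches numbered 0 to 11 and an arbitrary example row; any ordering of the twelve pitch classes works.

```python
# A brief sketch of building a twelve-tone matrix: each row of the matrix is the
# prime row transposed so that its first pitch traces the inversion of the
# original row. The example row is arbitrary, chosen only for illustration.

def twelve_tone_matrix(row):
    assert sorted(row) == list(range(12)), "row must contain each pitch class once"
    matrix = []
    for i in range(12):
        transposition = (row[0] - row[i]) % 12
        matrix.append([(pitch + transposition) % 12 for pitch in row])
    return matrix

example_row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
for line in twelve_tone_matrix(example_row):
    print(line)
```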



A fresh row may be employed once the aggregate (that is, all twelve pitches of one row) has been incorporated into the score.

In addition to being set horizontally as melodic lines, the rows may be divided into subsets to provide harmonic content (vertical collections of sounds).

Later composers like Pierre Boulez and Karlheinz Stockhausen experimented with serializing additional musical aspects by building matrices that included dynamics and timbre.

Some algorithmic composing approaches were created in response to serialist composers' rejection or modification of previous techniques.

Serialist composers, according to Iannis Xenakis, were excessively concentrated on harmony as a succession of interconnecting linear objects (the establishment of linear tone-rows), and the procedures grew too difficult for the listener to understand.

He presented new ways to adapt nonmusical algorithms for music creation that might work with dense sound masses.

The strategy, according to Xenakis, liberated music from its linear concerns.

He was motivated by scientific studies of natural and social events such as moving particles in a cloud or thousands of people assembled at a political rally, and he focused his compositions on the application of probability theory and stochastic processes.

Xenakis, for example, used Markov chains to manipulate musical elements like pitch, timbre, and dynamics to gradually build thick-textured sound masses over time.

In a Markov chain, the likelihood of the next event is largely determined by the preceding ones; his use of such algorithms thus mixed indeterminate aspects, like those in Cage's chance music, with deterministic elements, like serialism.

He dubbed this music stochastic music.
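
A miniature example of the Markov-chain idea is sketched below. The pitch set and transition probabilities are invented for illustration; Xenakis derived his from formal probabilistic models rather than hand-picked values.

```python
import random

# A small sketch of generating pitches with a Markov chain: the next pitch is
# drawn according to probabilities that depend on the current pitch.

transitions = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"C": 0.4, "E": 0.6},
    "E": {"C": 0.3, "D": 0.3, "G": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def markov_melody(start="C", length=16):
    melody = [start]
    for _ in range(length - 1):
        options = transitions[melody[-1]]
        next_pitch = random.choices(list(options.keys()), weights=list(options.values()))[0]
        melody.append(next_pitch)
    return melody

print(markov_melody())
```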

It prompted a new generation of composers to incorporate more complicated algorithms into their work.

Calculations for these composers ultimately necessitated the use of computers.

Xenakis was a forerunner in the use of computers in music, using them to assist in the calculation of the outcomes of his stochastic and probabilistic procedures.

With his 1978 album Ambient 1: Music for Airports, Brian Eno popularized ambient music, building on composer Erik Satie's notion of background music for live performers (known as furniture music).

The piece used seven tape loops of different lengths, each holding a distinct pitch.

With each pass of the loops, the pitches lined up in a new sequence, creating a melody that was always shifting.

The composition always develops in the same manner each time it is performed since the inputs are the same.




Eno coined the term "generative music" in 1995 to describe systems that produce constantly changing music by adjusting parameters over time.

Ambient and generative music are both forerunners of autonomous computer-based algorithmic creation, most of which now uses artificial intelligence techniques.

Noam Chomsky and his collaborators invented generative grammar, which is a set of principles for describing natural languages.

The rules define a range of potential serial orderings of items by rewriting hierarchically structured elements.

Generative grammars, which have been adapted for algorithmic composition, may be used to generate musical sections.
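
A toy grammar of this kind is sketched below; the rewrite rules and note names are invented for demonstration and have no connection to any published system.

```python
import random

# A toy generative grammar that rewrites hierarchical symbols into a sequence
# of notes, loosely illustrating the rule-rewriting idea described above.

rules = {
    "PHRASE": [["MOTIF", "MOTIF", "CADENCE"]],
    "MOTIF": [["C", "E", "G"], ["D", "F", "A"], ["E", "G", "B"]],
    "CADENCE": [["G", "C"], ["B", "C"]],
}

def expand(symbol):
    if symbol not in rules:              # terminal symbol: an actual note
        return [symbol]
    production = random.choice(rules[symbol])
    notes = []
    for part in production:
        notes.extend(expand(part))
    return notes

print(expand("PHRASE"))
```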

Experiments in Musical Intelligence (1996) by David Cope is possibly the best-known use of generative grammar.

Cope taught his program to produce music in the styles of a variety of composers, including Bach, Mozart, and Chopin.

In knowledge-based systems, information about the genre of music the composer wishes to replicate is encoded as a database of facts that can be used to develop an artificial expert to aid the composer.

Genetic algorithms approach composition by mimicking the process of biological evolution.

A population of randomly generated compositions is evaluated for its similarity to the intended musical output.

Artificial selection, modeled on natural selection, is then applied to increase the likelihood that musically attractive qualities will spread in subsequent generations.

The composer interacts with the system, stimulating new ideas in both the computer and the spectator.

Deep learning systems like generative adversarial networks, or GANs, are used in more contemporary AI-generated composition methodologies.

In music, generative adversarial networks pit a generator—which makes new music based on compositional style knowledge—against a discriminator, which tries to tell the difference between the generator's output and that of a human composer.

When the generator fails, the discriminator gets more information until it can no longer distinguish between genuine and created musical content.
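
A bare-bones sketch of this adversarial setup is given below, using toy fixed-length sequences of pitch values in place of real music and assuming the PyTorch library; actual music GANs are far more elaborate.

```python
import torch
import torch.nn as nn

# A minimal generator/discriminator training loop on toy 16-value "melodies".

seq_len = 16
generator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, seq_len))
discriminator = nn.Sequential(nn.Linear(seq_len, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def human_batch(batch_size=32):
    # Stand-in for a corpus of human-composed melodies (here: smooth random lines).
    return torch.cumsum(torch.randn(batch_size, seq_len) * 0.5, dim=1)

for _ in range(1000):
    real = human_batch()
    fake = generator(torch.randn(real.size(0), 32))

    # Discriminator: tell human melodies (label 1) from generated ones (label 0).
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce melodies the discriminator scores as human.
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```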

Music is rapidly being driven in new and fascinating ways by the repurposing of non-musical algorithms for musical purposes.


Jai Krishna Ponnappan





See also: 


Computational Creativity.


Further Reading:

 

Cope, David. 1996. Experiments in Musical Intelligence. Madison, WI: A-R Editions.

Eigenfeldt, Arne. 2011. “Towards a Generative Electronica: A Progress Report.” eContact! 14, no. 4: n.p. https://econtact.ca/14_4/index.html.

Eno, Brian. 1996. “Evolving Metaphors, in My Opinion, Is What Artists Do.” In Motion Magazine, June 8, 1996. https://inmotionmagazine.com/eno1.html.

Nierhaus, Gerhard. 2009. Algorithmic Composition: Paradigms of Automated Music Generation. New York: Springer.

Parviainen, Tero. “How Generative Music Works: A Perspective.” http://teropa.info/loop/#/title.







