Artificial Intelligence - The General Problem Solver Software.

     




    The General Problem Solver is a program that implements a problem-solving method built on means-ends analysis and planning.





    The software was designed so that the problem-solving process is separated from knowledge specific to the situation at hand, allowing it to be applied to a wide range of problems.

    First created by Allen Newell and Herbert Simon in 1957, the software took over a decade to complete.

    George W. Ernst, a graduate student of Newell's, wrote the final version in 1966 while doing research for his dissertation.



    The General Problem Solver arose from Newell and Simon's work on the Logic Theorist, another problem-solving tool.




    After building the Logic Theorist, the pair compared its problem-solving approach to that of people solving comparable problems.


    They discovered that the Logic Theorist's method differed significantly from that of humans.

    Newell and Simon developed the General Problem Solver using the knowledge of human problem solving gained from these experiments, hoping that their artificial intelligence work would contribute to a better understanding of human cognitive processes.

    They observed that human problem-solvers could look at the intended outcome and, reasoning both backward and forward, identify actions that would bring them closer to that outcome, thereby developing a solution.




    This mechanism was built into the General Problem Solver, which Newell and Simon regarded not only as a piece of artificial intelligence but also as a theory of human cognition.



    To solve problems, the General Problem Solver uses two heuristic techniques: 

    1. means-ends analysis and 
    2. planning.



    As an example of means-ends analysis in action, consider the following: 


    • If a person wants a certain book, their goal is to have it in their possession.
    • The book is currently held by the library, and they do not have it.
    • The individual can eliminate the gap between their existing state and this ideal state.
    • They may do so by borrowing the book from the library, and they have several alternatives for getting there, including driving.
    • If the book has been checked out by another patron, however, there are other ways to obtain it.
    • The person may buy it at a bookshop or order it online.
    • The individual must then weigh the various options open to them.
    • And so on.


    The individual is aware of a number of relevant actions they may take, and if they select the right ones and carry them out in the right sequence, they will obtain the book.


    The person choosing and carrying out suitable actions is means-ends analysis in action.





    When using means-ends analysis with the General Problem Solver, the programmer sets up the problem as a starting state and a goal state to be attained.


    The General Problem Solver computes the difference between these two states (which it calls objects).


    • Operators that lessen the difference between the two states must also be coded into the General Problem Solver.
    • It picks and applies an operator to the problem, then assesses whether the operation has brought it closer to its goal or ideal state.
    • If so, it moves on to the next operator.
    • If not, it can back up and try another operator.
    • The difference between the original state and the goal state is reduced to zero by applying operators (a minimal sketch of this loop appears below).
    • The General Problem Solver also had the capacity to plan.
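
    A minimal, hedged sketch of this means-ends loop in Python follows; the class, function, and fact names are invented for illustration and are not taken from the original IPL program.

```python
# Means-ends analysis in miniature: pick an operator relevant to the current
# state-goal difference, achieve its preconditions as a subgoal, apply it, and
# continue until the difference is reduced to zero.

from dataclasses import dataclass
from typing import FrozenSet, List, Optional, Tuple

State = FrozenSet[str]  # a state ("object") is just a set of facts


@dataclass
class Operator:
    name: str
    preconditions: FrozenSet[str]  # facts that must hold before applying
    adds: FrozenSet[str]           # facts the operator makes true
    deletes: FrozenSet[str]        # facts the operator makes false


def difference(state: State, goal: State) -> FrozenSet[str]:
    """Facts demanded by the goal that the current state lacks."""
    return goal - state


def achieve(state: State, goal: State, ops: List[Operator],
            depth: int = 10) -> Optional[Tuple[List[str], State]]:
    """Return (plan, resulting state) that reduces the difference to zero, or None."""
    missing = difference(state, goal)
    if not missing:
        return [], state
    if depth == 0:
        return None
    for op in ops:
        if not (op.adds & missing):
            continue                                            # irrelevant operator
        sub = achieve(state, op.preconditions, ops, depth - 1)  # subgoal: preconditions
        if sub is None:
            continue
        pre_plan, pre_state = sub
        new_state = (pre_state - op.deletes) | op.adds          # apply the operator
        rest = achieve(new_state, goal, ops, depth - 1)
        if rest is None:
            continue
        rest_plan, final_state = rest
        return pre_plan + [op.name] + rest_plan, final_state
    return None


# The book example from above, encoded as two operators.
ops = [
    Operator("drive_to_library", frozenset({"at_home"}),
             frozenset({"at_library"}), frozenset({"at_home"})),
    Operator("borrow_book", frozenset({"at_library", "book_on_shelf"}),
             frozenset({"have_book"}), frozenset({"book_on_shelf"})),
]
plan, _ = achieve(frozenset({"at_home", "book_on_shelf"}), frozenset({"have_book"}), ops)
print(plan)  # -> ['drive_to_library', 'borrow_book']
```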



    The General Problem Solver could sketch a solution to a problem by abstracting away the specifics of the operators and of the difference between the starting and desired states.


    After a broad solution had been outlined, the specifics could be reinserted into the problem, and the subproblems formed by these details could be addressed within the solution guidelines produced during the outlining step.

    Defining a problem and its operators in order to program the General Problem Solver was a time-consuming task for programmers.

    It also meant that, as a theory of human cognition or an example of artificial intelligence, the General Problem Solver took for granted the very acts that partly constitute intelligence, namely defining a problem and selecting relevant actions (or operations) from an infinite number of possible actions in order to solve it.



    In the mid-1960s, Ernst continued to work on General Problem Solver.


    He was not interested in human problem-solving procedures; instead, he wanted to find a way to broaden the scope of the General Problem Solver so that it could solve problems outside the logic domain.

    In his version of General Problem Solver, the intended state or object was expressed as a set of constraints rather than an explicit specification.

    Ernst also altered the form of the operators so that the output of an operator could be written as a function of the starting state or object (the input).

    His updated General Problem Solver was only somewhat successful in solving problems.

    Even on simple problems, it often ran out of memory.


    "We do believe that this specific aggregation of IPL-Vcode should be set to rest, as having done its bit in furthering our knowledge of the mechanics of intelligence," Ernst and Newell proclaimed in the foreword of their 1969 book GPS: A Case Study in Generality and Problem Solving(Ernst and Newell 1969, vii).



    Artificial Intelligence Problem Solving




    A reflex agent in AI maps states directly to actions. 

    When such an agent fails because the state-to-action mapping is too vast to store or handle, the problem is handed to a problem-solving agent, which divides the large problem into smaller subproblems and solves them one by one. 

    The combined sequence of actions then achieves the targeted goal.
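
    As a hedged illustration of this contrast (the states and actions below are invented), a reflex agent amounts to a hand-written state-to-action lookup table, which is exactly what becomes unmanageable when the state space is large.

```python
# A reflex agent: a direct state -> action mapping, written out by hand.
# Every situation the agent could meet has to be enumerated in advance.

REFLEX_TABLE = {
    "floor_dirty": "vacuum",
    "door_open": "close_door",
    "battery_low": "recharge",
}


def reflex_agent(percept: str) -> str:
    """Look the current percept up in the table; no search, no internal state."""
    return REFLEX_TABLE.get(percept, "do_nothing")


print(reflex_agent("floor_dirty"))  # -> vacuum
```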


    Different sorts of problem-solving agents are designed and used at an atomic level of representation, without any observable internal state, with a problem-solving algorithm chosen according to the problem and its working domain. 


    The problem-solving agent works by precisely defining problems and their possible solutions. 

    We may therefore say that problem solving is a subset of artificial intelligence that includes a variety of problem-solving approaches, such as tree search, B-tree, and heuristic algorithms.


    A problem-solving agent is also known as a goal-oriented agent, since it is always focused on achieving the desired outcome.


    AI problem-solving steps: 


    The AI approach to problem solving is closely tied to how people behave when they solve problems. 


    To solve a problem, we require a set of discrete steps, which makes the work straightforward. These are the actions that must be taken to solve a problem:


    • Goal formulation is the first and most basic stage in solving a problem. 

    It organizes discrete steps to establish a target or goal that requires some action to be achieved. 

    AI agents are now used to formulate the goal. 


    One of the most important steps in problem solving is problem formulation, which determines what actions should be taken to reach the formulated goal. 

    This essential aspect of AI relies on a software agent, which uses the following components to formulate the problem. 


    Components needed to formulate the problem: 

    Initial state: This component specifies the starting state of the problem, which directs the AI agent toward its objective. 


    In this scenario, new methods also use a particular class to address the problem area. 


    Action: In this step of problem formulation, all feasible actions are enumerated using a function with a specified class, obtained from the initial state.

    Transition: In this step of problem formulation, the action actually performed in the previous stage is combined with the final stage and passed on to the next stage.

    Goal test: This step assesses whether the integrated transition model has achieved the given goal; if it has, the action halts and moves on to estimating the cost of achieving the goal.

    Path cost: This component of problem solving assigns a numerical value to the cost of achieving the goal. 

    It accounts for all the hardware, software, and human effort required. (A minimal sketch of these components appears below.)
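
    A hedged sketch of these five components in Python follows; the Problem class and the toy two-room world are invented for illustration and are not tied to any particular library.

```python
# The five components of problem formulation bundled into one structure:
# initial state, actions, transition model, goal test, and path cost.

class Problem:
    def __init__(self, initial_state, actions, transition, goal_test, step_cost):
        self.initial_state = initial_state  # where the agent starts
        self.actions = actions              # state -> iterable of applicable actions
        self.transition = transition        # (state, action) -> next state
        self.goal_test = goal_test          # state -> bool
        self.step_cost = step_cost          # (state, action, next state) -> numeric cost


# Toy example: an agent in room "A" must reach room "B".
toy = Problem(
    initial_state="A",
    actions=lambda s: ["move"] if s == "A" else [],
    transition=lambda s, a: "B" if (s, a) == ("A", "move") else s,
    goal_test=lambda s: s == "B",
    step_cost=lambda s, a, s2: 1,
)

state, total_cost = toy.initial_state, 0
while not toy.goal_test(state):
    action = next(iter(toy.actions(state)))   # pick any applicable action
    next_state = toy.transition(state, action)
    total_cost += toy.step_cost(state, action, next_state)
    state = next_state
print(state, total_cost)  # -> B 1
```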



    General Problem Solver Overview


    In theory, GPS can solve any problem that can be described as a collection of well-formed formulas (WFFs) or Horn clauses forming a directed graph with one or more sources (that is, axioms) and sinks (that is, desired conclusions). 


    Predicate logic proofs and Euclidean geometry problem spaces are two primary examples of domains in which GPS may be used. 

    It was based on the theoretical work on logic machines by Simon and Newell. 

    GPS was the first computer program to separate its knowledge of problems (expressed as input data) from its problem-solving strategy (a generic solver engine). 


    IPL, a third-order programming language, was used to build GPS. 


    While GPS could tackle small, well-specified problems such as the Towers of Hanoi, it could not handle real-world problems, because its search was quickly lost in the combinatorial explosion. 

    Put differently, the number of "walks" through the inferential digraph became computationally prohibitive. 

    (In fact, even a simple state-space search like the Towers of Hanoi can become computationally infeasible, although smart pruning of the state space can be accomplished using basic AI methods such as A* and IDA*; a sketch follows below.)
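
    As a hedged illustration of that kind of pruning (this is generic A*, not code from GPS, and the function names are invented), a best-first search expands states in order of path cost so far plus a heuristic estimate of the cost remaining.

```python
# Compact A*: expand states in order of g(n) + h(n); neighbors() and heuristic()
# are supplied by the caller for the problem at hand.

import heapq
import itertools
from typing import Callable, Dict, Hashable, Iterable, List, Optional, Tuple


def a_star(start: Hashable,
           is_goal: Callable[[Hashable], bool],
           neighbors: Callable[[Hashable], Iterable[Tuple[Hashable, float]]],
           heuristic: Callable[[Hashable], float]) -> Optional[List[Hashable]]:
    """Return a lowest-cost path from start to a goal state, or None."""
    tie = itertools.count()  # tie-breaker so the heap never compares states directly
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g: Dict[Hashable, float] = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt, step_cost in neighbors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
    return None


# Tiny usage example: shortest path on a four-node graph with a zero heuristic.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
print(a_star("A", lambda s: s == "D", lambda s: graph[s], lambda s: 0))
# -> ['A', 'B', 'C', 'D']
```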



    To solve problems, the user identified objects and the operations that could be performed on them, and GPS generated heuristics via means-ends analysis. 


    • It concentrated on the available operations, determining which inputs and outputs were acceptable. 
    • It then established subgoals in order to move closer to the ultimate objective.
    • The GPS concept ultimately developed into the Soar artificial intelligence architecture.





    See also: 

    Expert Systems; Simon, Herbert A.



    Frequently Asked Questions:


    What is the General Problem Solver in artificial intelligence?

    Herbert Simon, J. C. Shaw, and Allen Newell proposed the General Problem Solver (GPS) as an AI program. It was among the earliest working computer programs in the field of artificial intelligence, and it was intended to function as a universal problem-solving machine.


    How does the General Problem Solver work?

    To solve problems, the user identified objects and the operations that could be performed on them, and GPS generated heuristics via means-ends analysis. It concentrated on the available operations, determining which inputs and outputs were acceptable.


    What exactly did the General Problem Solver accomplish?

    The General Problem Solver (GPS), which debuted in 1957, was their next effort. GPS would repeatedly apply heuristic techniques (modifiable "rules of thumb") to a problem and then perform a means-ends analysis at each step to see whether it was getting closer to the intended solution.


    What are the three key domain-universal problem-solving heuristics that Newell and Simon's General Problem Solver incorporated in 1972?

    According to Newell and Simon (1972), every problem has a problem space that is defined by three components: (1) the problem's initial state; (2) a set of operators for transforming a problem state; and (3) a test to determine whether a problem state is a solution.


    What is heuristic search and how does it work?

    Heuristic search is a strategy for finding a good solution to a problem by searching a solution space. The heuristic uses some mechanism for exploring the solution space while estimating where the solution is most likely to be found and concentrating the search on that region.


    What are the elements of a problem statement?

    The problem itself, stated clearly and with enough context to explain why it is significant; the method of solving the problem, often presented as a claim or working thesis; and the purpose, statement of objective, and scope of the document the writer is preparing.


    What are the stages of a basic development process employing a problem-solving approach?

    Problem-Solving Process in 8 Steps:

    Step 1: Define the issue. What exactly is the issue?

    Step 2: Clarify the issue.

    Step 3: Establish the objectives.

    Step 4: Determine the problem's root cause.

    Step 5: Make a plan of action.

    Step 6: Put your plan into action.

    Step 7: Assess the outcomes.

    Step 8: Always strive to improve.

     

    What's the difference between heuristic and algorithmic problem-solving?

    An algorithm is a step-by-step technique for addressing a given problem in a finite number of steps. Given the same parameters (input), an algorithm's outcome (output) is predictable and repeatable. A heuristic is an educated guess that serves as a starting point for further investigation.


    What makes algorithms superior to heuristics?

    Heuristics involve using a learning and discovery strategy to reach a solution, whereas an algorithm is a clearly defined set of instructions for solving a problem. Use an algorithm if you already know how to solve the problem.



    References and Further Reading:


    Barr, Avron, and Edward Feigenbaum, eds. 1981. The Handbook of Artificial Intelligence, vol. 1, 113–18. Stanford, CA: HeurisTech Press.

    Ernst, George W., and Allen Newell. 1969. GPS: A Case Study in Generality and Problem Solving. New York: Academic Press.

    Newell, Allen, J. C. Shaw, and Herbert A. Simon. 1960. “Report on a General Problem Solving Program.” In Proceedings of the International Conference on Information Processing (June 15–20, 1959), 256–64. Paris: UNESCO.

    Simon, Herbert A. 1991. Models of My Life. New York: Basic Books.

    Simon, Herbert A., and Allen Newell. 1961. “Computer Simulation of Human Thinking and Problem Solving.” Datamation 7, part 1 (June): 18–20, and part 2 (July): 35–37.



    Artificial Intelligence - General and Narrow Categories Of AI.






    There are two types of artificial intelligence: general (also called strong, powerful, or complete) and narrow (also called weak, limited, or specialized).

    General AI, of the kind seen in science fiction, does not yet exist in the real world.

    Machines with general intelligence would be capable of completing any intellectual task that humans can perform.

    Such a system would also appear to think in abstract terms, make connections, and express innovative ideas in the same way that people do, displaying a human-like capacity for abstract thought and problem solving.



    Such a computer would be capable of thinking, planning, and recalling information from the past.

    While the aim of general AI has yet to be achieved, there are more and more instances of narrow AI.

    These are machines that perform at human (or even superhuman) levels on certain tasks.

    Computers that have learnt to play complicated games have abilities, techniques, and behaviors that are comparable to, if not superior to, those of the most skilled human players.

    AI systems have also been developed that can translate between languages in real time, interpret and respond to natural language (both spoken and written), and recognize, identify, and sort images based on their content.

    However, the ability to generalize knowledge or skills is still largely a human accomplishment.

    Nonetheless, there is a lot of work being done in the field of general AI right now.

    It will be difficult to determine when a computer develops human-level intelligence.

    Several serious and humorous tests have been proposed to determine whether a computer has reached the level of general AI.

    The Turing Test is arguably the most renowned of these examinations.

    In the Turing Test, a human evaluator holds text conversations with both a machine and a person without seeing either of them.

    The evaluator must figure out which conversational partner is the machine and which is the human.

    The machine passes the test if it can fool the human evaluator a prescribed percentage of the time.

    The Coffee Test is a more fantastical test in which a machine enters a typical household and brews coffee.



    It has to find the coffee machine, find the coffee, add water, brew the coffee, and pour it into a cup.

    Another is the Flat Pack Furniture Test, which involves a machine receiving, unpacking, and assembling a piece of furniture based only on the instructions supplied.

    Some scientists, as well as many science fiction writers and fans, believe that once intelligent machines reach a tipping point, they will be able to improve exponentially.

    AI-based beings that far exceed human capabilities might be one conceivable result.

    The point at which AI takes control of its own self-improvement is known as the Singularity, or artificial superintelligence (ASI).

    If ASI is achieved, it will have unforeseeable consequences for human society.

    Some pundits worry that ASI would jeopardize humanity's safety and dignity.

    Whether the Singularity will ever happen, and how dangerous it might be, remain matters of dispute.

    Narrow AI applications are becoming more popular across the globe.

    Machine learning (ML) is at the heart of most new applications, and most AI examples in the news are connected to this subset of technology.

    Traditional or conventional algorithms are not the same as machine learning programs.

    In programs that cannot learn, a computer programmer actively adds code to account for every action of an algorithm.

    All of the decisions made along the process are governed by the programmer's guidelines.

    This necessitates the programmer imagining and coding for every possible circumstance that an algorithm may face.

    This kind of program code is bulky and often inadequate, especially when it must be updated frequently to account for new or unanticipated scenarios.

    The utility of hard-coded algorithms approaches its limit in cases where the criteria for optimum judgments are unclear or impossible for a human programmer to foresee.

    Machine learning is the process of training a computer to detect and identify patterns via examples rather than predefined rules.



    This is achieved, according to Google engineer Jason Mayes, by reviewing extremely large quantities of training data or engaging in some other kind of programmed learning step.

    By processing this data, the system extracts new patterns.

    The system may then classify newly unknown data based on the patterns it has already found.

    Machine learning allows an algorithm to recognize patterns or rules underlying decision-making processes on its own.

    Machine learning also allows a system's output to improve over time as it gains more experience (Mayes 2017).

    A human programmer continues to play a vital role in this learning process, influencing results by making choices like developing the exact learning algorithm, selecting the training data, and choosing other design elements and settings.

    Machine learning is powerful once it's up and running because it can adapt and enhance its ability to categorize new data without the need for direct human interaction.

    In other words, the quality of the output improves as the system gains experience.
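
    A hedged sketch of this learn-from-examples workflow follows, using scikit-learn and synthetic data purely for illustration; it is not tied to any of the systems mentioned above.

```python
# Train a classifier on labeled examples, then apply the learned patterns to
# data the model has never seen.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples stand in for the "very large quantities of training data".
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()   # the programmer still chooses the learning algorithm
model.fit(X_train, y_train)    # patterns are extracted from the training data

# Previously unseen data is classified using the patterns the model found.
print("held-out accuracy:", model.score(X_test, y_test))
```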

    Artificial intelligence is a broad word that refers to the science of making computers intelligent.

    According to scientists, AI is a computer system that can collect data and use it to make judgments or solve problems.

    Another popular scientific definition of AI is "a software program paired with hardware that can receive (or sense) inputs from the world around it, evaluate and analyze those inputs, and create outputs and suggestions without the assistance of a person." When programmers claim an AI system can learn, they're referring to the program's ability to change its own processes in order to provide more accurate outputs or predictions.

    AI-based systems are now being developed and used in practically every industry, from agriculture to space exploration, and in applications ranging from law enforcement to online banking.

    The methods and techniques used in computer science are always evolving, extending, and improving.

    Other techniques linked to machine learning, such as reinforcement learning and neural networks, are important components of cutting-edge artificial intelligence systems.





    See also: 

    Embodiment, AI and; Superintelligence; Turing, Alan; Turing Test.


    Further Reading:


    Kelnar, David. 2016. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1.

    Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

    Mayes, Jason. 2017. Machine Learning 101. https://docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/htmlpresent.

    Müller, Vincent C., and Nick Bostrom. 2016. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 553–71. New York: Springer.

    Russell, Stuart, and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

    Samuel, Arthur L. 1988. “Some Studies in Machine Learning Using the Game of Checkers I.” In Computer Games I, 335–65. New York: Springer.



    Artificial Intelligence - The Frame Problem.

     




     John McCarthy and Patrick Hayes discovered the frame problem in 1969.

    The problem concerns representing the effects of actions in logic-based artificial intelligence.

    Formal logic is used to state facts about the world, such as that a car can be started when the key is placed in the ignition and turned, and that pressing the accelerator causes it to move forward.

    However, the latter fact does not explicitly state that the car remains on after pressing the accelerator.

    To correct this, the fact must be expanded to "pressing the accelerator moves the car forward and does not turn it off." However, this fact must be augmented further to cover many other conditions (e.g., that the driver also remains in the vehicle).

    The frame problem highlights an issue in logic involving the construction of facts that do not require enumerating thousands of trivial effects.
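
    To make this concrete, here is a hedged sketch in situation-calculus-style notation; the predicate and action names are illustrative and are not drawn from McCarthy and Hayes's original formulation.

```latex
% Effect axiom: pressing the accelerator makes the car move forward.
\forall s \,\big(\mathit{Running}(\mathit{car}, s) \rightarrow \mathit{Moving}(\mathit{car}, do(\mathit{PressAccel}, s))\big)

% Frame axiom: pressing the accelerator leaves the engine's state unchanged.
\forall s \,\big(\mathit{Running}(\mathit{car}, s) \rightarrow \mathit{Running}(\mathit{car}, do(\mathit{PressAccel}, s))\big)
```

    Naively, one such frame axiom is needed for every action-property pair the action leaves untouched, which is exactly the proliferation of trivial facts the frame problem names.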

    After its discovery by artificial intelligence researchers, the frame issue was taken up by philosophers.





    Their version of the issue could be appropriately dubbed the world update problem since it involves updating frames of reference.

    For example, how do you know your dog (or other pet) is where you last saw them without looking again? From a philosophical perspective, the frame problem concerns how well a person's understanding of their environment corresponds to reality and when that understanding should be updated.

    As intelligent agents plan activities in more complicated environments, they will have to deal with this issue.

    To solve the logic version of the frame problem, a number of solutions have been proposed.

    The philosophical problem, on the other hand, remains unsolved.

    Both must be solved in order for artificial intelligence to behave intelligently.









    See also: 

    McCarthy, John.


    Further Reading:


    McCarthy, John, and Patrick J. Hayes. 1969. “Some Philosophical Problems from the Standpoint of Artificial Intelligence.” In Machine Intelligence, vol. 4, edited by Donald Michie and Bernard Meltzer, 463–502. Edinburgh, UK: Edinburgh University Press.

    Shanahan, Murray. 1997. Solving the Frame Problem: A Mathematical Investigation of  the Common Sense Law of Inertia. Cambridge, MA: MIT Press.

    Shanahan, Murray. 2016. “The Frame Problem.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/entries/frame-problem.






    Artificial Intelligence - Generative Design.

     



    Any iterative rule-based technique used to develop several choices that fulfill a stated set of objectives and constraints is referred to as generative design.

    The end result of such a process may be anything from complicated architectural models to works of art, and it could be used in a number of industries, including architecture, art, engineering, and product design, to mention a few.

    A more conventional design technique involves evaluating a very small number of possibilities before selecting one to develop into a finished product.

    The justification for utilizing a generative design framework is that the end aim of a project is not always known at the start.

    As a result, the goal should not be to come up with a single proper solution to an issue, but rather to come up with a variety of feasible choices that all meet the requirements.

    Using a computer's processing capacity, multiple variations of a solution may be quickly created and analyzed, much more quickly than a person could.

    As the designer's or user's aims and overall vision become clearer, the input parameters are fine-tuned to refine the solution space.

    This avoids the problem of being locked into a single solution too early in the design phase, allowing for creative exploration of a broad variety of possibilities.

    The expectation is that by doing so, the odds of achieving a result that best meets the defined design requirements will increase.

    It's worth noting that generative design doesn't have to be a digital process; an iterative approach might be created in a physical environment.

    However, since a computer's processing capacity (i.e., the quantity and speed of calculations) greatly exceeds that of a person, generative design approaches are often equated with digital techniques.

    The creative process is being aided by digital technologies, particularly artificial intelligence-based solutions.

    Generative art and computational design in architecture are two examples of artificial intelligence applications.

    The term "generative art," often known as "computer art," refers to artwork created in part with the help of a self-contained digital system.

    Decisions that would normally be made by a human artist are delegated to an automated procedure in whole or in part.

    Instead, by describing the inputs and rule sets to be followed, the artist generally maintains some influence over the process.

    Georg Nees, Frieder Nake, and A. Michael Noll are usually acknowledged as the inventors of visual computer art.

    The "3N" group of computer pioneers is sometimes referred to as a unit.

    Georg Nees is widely credited with the founding of the first generative art exhibition, Computer Graphic, in Stuttgart in 1965.

    In the same year, exhibitions by Nake (in cooperation with Nees) and Noll were held in Stuttgart and New York City, respectively (Boden and Edmonds 2009).

    In their use of computers to generate works of art, these early examples of generative art in the visual media are groundbreaking.

    They were also constrained by the existing research methodologies at the time.

    In today's world, the availability of AI-based technology, along with exponential advances in processing power, has resulted in the emergence of new forms of generative art.

    Computational creativity, described as "a discipline of artificial intelligence focused on designing agents that make creative goods autonomously," is an intriguing subset of these new efforts (Davis et al. 2016).

    When it comes to generative art, the purpose of computational creativity is to use machine learning methods to tap into a computer's creative potential.

    In this approach, the creativity process shifts away from giving a computer step-by-step instructions (as was the case in the early days) and toward more abstract procedures with unpredictable outputs.

    The DeepDream computer vision program, created by Google engineer Alexander Mordvintsev in 2015, is a modern example of computational creativity.

    A convolutional neural network is used in this project to purposefully over-process a picture.

    This brings forward patterns that correspond to how a certain layer in the network interprets an input picture based on the image types it has been taught to recognize.

    The end effect is psychedelic reinterpretations of the original picture, comparable to what one may see in a restless night's sleep.

    Mordvintsev demonstrates how a neural network trained on a set of animals can take images of clouds and convert them into rough animal representations that match the detected features.

    Using a different training set, the network would transform elements like horizon lines and towering vertical structures into squiggly representations of skyscrapers and buildings.

    As a result, these new pictures might be regarded unexpected unique pieces of art made entirely by the computer's own creative process based on a neural network.
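
    A rough, hedged sketch of this kind of "over-processing" follows, written with PyTorch and torchvision as assumed dependencies; the pretrained network, layer choice, step size, and file names are arbitrary, and this is not Mordvintsev's original code.

```python
# Gradient ascent on one layer's activations: the image is nudged toward
# whatever patterns that layer has learned to respond to.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

captured = {}
def hook(module, inputs, output):
    captured["act"] = output
model.inception4c.register_forward_hook(hook)  # an arbitrary mid-level layer

img = Image.open("clouds.jpg").convert("RGB").resize((224, 224))
x = T.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model(x)
    loss = captured["act"].norm()  # amplify whatever this layer responds to
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.grad.zero_()

T.ToPILImage()(x.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```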

    Another contemporary example of computational creativity is My Artificial Muse.

    Unlike DeepDream, which depends entirely on a neural network to create art, Artificial Muse investigates how an AI-based method might cooperate with a human to inspire new paintings (Barqué-Duran et al. 2018).

    The neural network is trained using a massive collection of human postures culled from existing photos and rendered as stick figures.

    The data is then used to build an entirely new pose, which is fed back into the algorithm; the algorithm then reconstructs what it believes a painting based on this pose should look like.

    As a result, the new stance might be seen as a muse for the algorithm, inspiring it to produce an entirely unique picture, which is subsequently executed by the artist.

    Two-dimensional computer-aided drafting (CAD) systems were the first to integrate computers into the field of architecture, and they were used to directly imitate the job of hand sketching.

    Although using a computer to create drawings was still a manual process, it was seen to be an advance over the analogue method since it allowed for more accuracy and reproducibility.

    These rudimentary CAD applications were soon surpassed by more sophisticated parametric design software, which takes a more programmatic approach to constructing an architectural model (i.e., geometry is created through user-specified variables).

    Today, the most popular platform for this sort of work is Grasshopper (a plugin for the three-dimensional computer-aided design software Rhino), which was created by David Rutten in 2007 while working at Robert McNeel & Associates.

    Take, for example, defining a rectangle, which is a pretty straightforward geometric problem.

    The length and breadth values would be created as user-controlled parameters in a parametric modeling technique.

    The program would automatically change the final design (i.e., the rectangle drawing) based on the parameter values provided.
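
    A hedged, toy version of that rectangle example in Python (the function below is purely illustrative and is not Grasshopper's API): two user-controlled parameters drive the geometry, and changing either one regenerates the drawing.

```python
# Two parameters (length, width) fully determine the generated geometry.

def rectangle(length: float, width: float):
    """Return the rectangle's corner points, regenerated from the parameters."""
    return [(0.0, 0.0), (length, 0.0), (length, width), (0.0, width)]


# Adjusting a parameter automatically updates the final design.
print(rectangle(4, 2))   # -> [(0.0, 0.0), (4, 0.0), (4, 2), (0.0, 2)]
print(rectangle(10, 3))  # -> [(0.0, 0.0), (10, 0.0), (10, 3), (0.0, 3)]
```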

    Imagine this on a bigger scale, where a set of parameters connects a complicated collection of geometric representations (e.g., curves, surfaces, planes, etc.).

    As a consequence, basic user-specified parameters may be used to determine the output of a complicated geometric design.

    A further advantage is that parameters can interact in unexpected ways, producing results that a creator would not have imagined.

    Despite the fact that parametric design uses a computer to produce and display complicated results, the process is still manual.

    A set of parameters must be specified and controlled by a person.

    The computer or program that performs the design computations is given more agency in generative design methodologies.

    Neural networks may be trained on examples of designs that meet a project's general aims, and then used to create multiple design proposals using fresh input data.

    A recent example of generative design in an architectural environment is the layout of the new Autodesk headquarters in Toronto's MaRS Innovation District (Autodesk 2016).

    Existing workers were polled as part of this initiative, and data was collected on six quantifiable goals: work style preference, adjacency preference, degree of distraction, interconnection, daylight, and views to the outside.

    All of these requirements were taken into account by the generative design algorithm, which generated numerous office arrangements that met or exceeded the stated standards.

    These findings were analyzed, and the highest-scoring ones were utilized to design the new workplace arrangement.

    In this approach, a huge quantity of data was utilized to build a final optimal design, including prior projects and user-specified data.

    The data linkages would have been too complicated for a person to comprehend, and could only be fully explored through a generative design technique.
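
    A heavily simplified, hedged sketch of the generate-and-evaluate loop behind such a workflow follows; the goal names echo the six measured goals above, but the candidate generation and scoring here are invented.

```python
# Generative design as generate-and-score: many candidate layouts are produced,
# each is measured against the stated goals, and the best performers are kept.

import random

GOALS = ["work_style", "adjacency", "low_distraction",
         "interconnectivity", "daylight", "outside_views"]


def random_layout():
    # A real system would generate geometry; here a candidate is just a score per goal.
    return {g: random.random() for g in GOALS}


def score(layout):
    return sum(layout.values())  # equal weighting, purely illustrative


candidates = [random_layout() for _ in range(1000)]
top = sorted(candidates, key=score, reverse=True)[:5]  # highest-scoring options
print([round(score(c), 2) for c in top])
```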

    In a broad variety of applications where a designer wants to explore a big solution area, generative design techniques have shown to be beneficial.

    It avoids the issue of concentrating on a single solution too early in the design phase by allowing for creative explorations of a variety of possibilities.

    As AI-based computational approaches develop, generative design will find new uses.





    See also: 

    Computational Creativity.


    Further Reading:


    Autodesk. 2016. “Autodesk @ MaRS.” Autodesk Research. https://www.autodeskresearch.com/projects/autodesk-mars.

    Barqué-Duran, Albert, Mario Klingemann, and Marc Marzenit. 2018. “My Artificial Muse.” https://albertbarque.com/myartificialmuse.

    Boden, Margaret A., and Ernest A. Edmonds. 2009. “What Is Generative Art?” Digital Creativity 20, no. 1–2: 21–46.

    Davis, Nicholas, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko. 2016. “Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent.” In Proceedings of the 21st International Conference on Intelligent User Interfaces—IUI ’16, 196–207. Sonoma, CA: ACM Press.

    Menges, Achim, and Sean Ahlquist, eds. 2011. Computational Design Thinking: Computation Design Thinking. Chichester, UK: J. Wiley & Sons.

    Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 2015. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog. https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.

    Nagy, Danil, and Lorenzo Villaggi. 2017. “Generative Design for Architectural Space Planning.” https://www.autodesk.com/autodesk-university/article/Generative-Design-Architectural-Space-Planning-2019.

    Picon, Antoine. 2010. Digital Culture in Architecture: An Introduction for the Design Professions. Basel, Switzerland: Birkhäuser Architecture.

    Rutten, David. 2007. “Grasshopper: Algorithmic Modeling for Rhino.” https://www.grasshopper3d.com/.





    Artificial Intelligence - Gender and Artificial Intelligence.

     



    Artificial intelligence and robots are often thought to be sexless and genderless in today's society, but this is not the case.

    Humans, however, encode gender and stereotypes into artificial intelligence systems much as gender is woven into language and culture.

    The data used to train artificial intelligences has a gender bias.

    Biased data may cause significant discrepancies in computer predictions and conclusions.

    These differences would be said to be discriminating in humans.

    AIs are only as good as the people who provide the data that machine learning systems capture, and they are only as ethical as the programmers who create and supervise them.

    When people exhibit gender prejudice, machines learn to treat it as normal (if not acceptable) human behavior.

    When utilizing numbers, text, graphics, or voice recordings to teach algorithms, bias might emerge.

    Machine learning is the use of statistical models to evaluate and categorize large amounts of data in order to generate predictions.

    Deep learning is the use of neural network architectures intended to mimic the human brain.

    Data is labeled using classifiers based on previous patterns.

    Classifiers have a lot of power.

    By studying data from automobiles visible in Google Street View, they can precisely forecast income levels and political leanings of neighborhoods and cities.

    The language individuals employ reveals gender prejudice.

    This bias may be apparent in the names of items as well as how they are ranked in significance.

    Descriptions of men and women are skewed, beginning with the frequency with which their respective titles are used and whether they are referred to as men and women versus boys and girls.

    The analogies and words employed are skewed as well.

    Biased AI may influence whether or not individuals of particular genders or ethnicities are targeted for certain occupations, whether or not medical diagnoses are correct, whether or not they are able to acquire loans, and even how exams are scored.

    "Woman" and "girl" are more often associated with the arts than with mathematics in AI systems.

    Similar biases have been discovered in Google's AI systems for finding employment prospects.



    Facebook and Microsoft's algorithms regularly correlate pictures of cooking and shopping with female activity, whereas sports and hunting are associated with masculine activity.

    Researchers have discovered instances when gender prejudices are purposefully included into AI systems.

    Men, for example, are more often provided opportunities to apply for highly paid and sought-after positions on job sites than women.

    Female-sounding names for digital assistants on smartphones include Siri, Alexa, and Cortana.

    According to Alexa's creator, the name came from negotiations with Amazon CEO Jeff Bezos, who wanted a virtual assistant with the personality and gender of the starship Enterprise's computer from the Star Trek television series, which is voiced by a woman.

    Deborah Harrison, the Cortana project's head, says that its female voice arose from research showing that people respond better to female voices.

    However, when BMW introduced a female voice to its in-car GPS route planner, it experienced instant backlash from males who didn't want their vehicles to tell them what to do.

    Female voices should seem empathic and trustworthy, but not authoritative, according to the company.

    Affectiva, a startup that specializes in artificial intelligence, utilizes photographs of six million people's faces as training data to attempt to identify their underlying emotional states.

    The startup is now collaborating with automakers to utilize real-time footage of drivers to assess whether or not they are weary or furious.

    The automobile would advise these drivers to pull over and take a break.

    However, the organization has discovered that women seem to "laugh more" than males, which complicates efforts to accurately estimate the emotional states of normal drivers.

    The same biases can be found in hardware.

    A disproportionate percentage of female robots are created by computer engineers, who are still mostly male.

    NASA's Valkyrie humanoid robot has breasts.

    Jia, a shockingly human-looking robot created at China's University of Science and Technology, has long wavy black hair, pale complexion, and pink lips and cheeks.

    She maintains her eyes and head inclined down when initially spoken to, as though in reverence.

    She wears a tight gold gown that is slender and busty.

    "Yes, my lord, what can I do for you?" she says as a welcome.

    "Don't get too near to me while you're taking a photo," Jia says when asked to snap a picture.

    It will make my face seem chubby." In popular culture, there is a strong prejudice against female robots.

    Fembots in the 1997 film Austin Powers discharged bullets from their breast cups, weaponizing female sexuality.

    The majority of robots in music videos are female robots.

    Duran Duran's "Electric Barbarella" was the first song accessible for download on the internet.

    Bjork's video "The Girl And The Robot" gave birth to the archetypal white-sheathed robot seen today in so many places.

    Marina and the Diamonds' protest that "I Am Not a Robot" is met by Hoodie Allen's fast answer that "You Are Not a Robot." In "The Ghost Inside," by the Broken Bells, a female robot sacrifices plastic body parts to pay tolls and reclaim paradise.

    The skin of Lenny Kravitz's "Black Velveteen" is titanium.

    Hatsune Miku and Kagamine Rin are anime-inspired holographic vocaloid singers.

    Daft Punk is the notable exception, where robot costumes conceal the genuine identity of the male musicians.

    Sexy robots are the principal love interests in films like Metropolis (1927), The Stepford Wives (1975), Blade Runner (1982), Ex Machina (2014), and Her (2013), as well as television programs like Battlestar Galactica and Westworld.

    Meanwhile, "killer robots," or deadly autonomous weapons systems, are hypermasculine.

    Atlas, Helios, and Titan are examples of rugged military robots developed by the Defense Advanced Research Projects Agency (DARPA).

    Achilles, Black Knight, Overlord, and Thor PRO are some of the names given to self-driving automobiles.

    The HAL 9000 computer implanted in the spacecraft Discovery in 2001: A Space Odyssey (1968), the most renowned autonomous vehicle of all time, is masculine and deadly.

    In the field of artificial intelligence, there is a clear gender disparity.

    The head of the Stanford Artificial Intelligence Lab, Fei-Fei Li, revealed in 2017 that her team was mostly made up of "men in hoodies" (Hempel 2017).

    Women make up just approximately 12% of the researchers who speak at major AI conferences (Simonite 2018b).

    In computer and information sciences, women have 19% of bachelor's degrees and 22% of PhD degrees (NCIS 2018).

    Women now have a lower proportion of bachelor's degrees in computer science than they did in 1984, when they had a peak of 37 percent (Simonite 2018a).

    This is despite the fact that the earliest "computers," as shown in the film Hidden Figures (2016), were women.

    There is significant dispute among philosophers over whether un-situated, gender-neutral knowledge may exist in human society.

    Users projected gender preferences on Google and Apple's unsexed digital assistants even after they were launched.

    White males developed centuries of professional knowledge, which was eventually unleashed into digital realms.

    Will machines be able to build and employ rules based on impartial information for hundreds of years to come? In other words, does scientific knowledge have a gender? Is it masculine or feminine? Alison Adam is a Science and Technology Studies researcher who is more concerned with the gender of the ideas produced than with the gender of the people involved.

    Sage, a British corporation, recently employed a "conversation manager" tasked with building a gender-neutral digital assistant, eventually dubbed "Pegg." To guide its programmers, the company has also formalized "five key principles" in an "ethics of code" document.

    According to Sage CEO Kriti Sharma, "by 2020, we'll spend more time talking to machines than our own families," thus getting technology right is critical.

    Aether, a Microsoft internal ethics panel for AI and Ethics in Engineering and Research, was recently established.

    Gender Swap is a project that employs a virtual reality system as a platform for embodiment experience, a kind of neuroscience in which users may sense themselves in a new body.

    Human partners utilize the immersive Head Mounted Display Oculus Rift and first-person cameras to generate the brain illusion.

    Both users coordinate their motions to generate this illusion.

    The embodiment illusion does not work unless each user's movements correspond to the other's.

    It implies that every move they make jointly must be agreed upon by both users.

    On a regular basis, new causes of algorithmic gender bias are discovered.

    In 2018, Joy Buolamwini, an MIT computer science graduate student, discovered gender and racial bias in the way commercial AI systems analyzed people's faces.

    Working with other researchers, she found that the datasets used by facial analysis systems, when graded with the dermatologist-approved Fitzpatrick skin-type scale, were primarily made up of lighter-skinned subjects (up to 86 percent).

    The researchers built a benchmark dataset rebalanced by skin type and used it to evaluate three off-the-shelf gender classification systems.

    They discovered that darker-skinned girls are the most misclassified in all three commercial systems.
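
    A minimal, hedged sketch of the kind of subgroup audit behind that finding follows; the records below are invented, not the Gender Shades data.

```python
# Per-subgroup accuracy: the same classifier is scored separately for each
# demographic group to expose disparities.

from collections import defaultdict


def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / totals[g] for g in totals}


audit = [
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("darker_female", "female", "male"),    # a misclassification
    ("darker_female", "female", "female"),
]
print(subgroup_accuracy(audit))
```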

    Buolamwini founded the Algorithmic Justice League, a group that fights unfairness in decision-making software.




    See also: 

    Algorithmic Bias and Error; Explainable AI.


    Further Reading:


    Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research: Conference on Fairness, Accountability, and Transparency 81: 1–15.

    Hempel, Jessi. 2017. “Melinda Gates and Fei-Fei Li Want to Liberate AI from ‘Guys With Hoodies.’” Wired, May 4, 2017. https://www.wired.com/2017/05/melinda-gates-and-fei-fei-li-want-to-liberate-ai-from-guys-with-hoodies/.

    Leavy, Susan. 2018. “Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning.” In GE ’18: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16. New York: Association for Computing Machinery.

    National Center for Education Statistics (NCIS). 2018. Digest of Education Statistics. https://nces.ed.gov/programs/digest/d18/tables/dt18_325.35.asp.

    Roff, Heather M. 2016. “Gendering a Warbot: Gender, Sex, and the Implications for the Future of War.” International Feminist Journal of Politics 18, no. 1: 1–18.

    Simonite, Tom. 2018a. “AI Is the Future—But Where Are the Women?” Wired, August 17, 2018. https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/.

    Simonite, Tom. 2018b. “AI Researchers Fight Over Four Letters: NIPS.” Wired, October 26, 2018. https://www.wired.com/story/ai-researchers-fight-over-four-letters-nips/.

    Søraa, Roger Andre. 2017. “Mechanical Genders: How Do Humans Gender Robots?” Gender, Technology, and Development 21, no. 1–2: 99–115.

    Wosk, Julie. 2015. My Fair Ladies: Female Robots, Androids, and Other Artificial Eves. New Brunswick, NJ: Rutgers University Press.


