Artificial Intelligence - Generative Design.

 



Generative design refers to any iterative, rule-based technique used to develop multiple design options that satisfy a stated set of objectives and constraints.

The output of such a process may be anything from complex architectural models to works of art, and the approach can be applied across a number of fields, including architecture, art, engineering, and product design, to name a few.

A more conventional design process, by contrast, evaluates only a small number of possibilities before selecting one to develop into a finished product.

The justification for using a generative design framework is that the end goal of a project is not always known at the start.

As a result, the aim is not to arrive at a single correct solution to a problem, but rather to produce a variety of feasible options that all meet the requirements.

Using a computer's processing power, many variations of a solution can be created and analyzed far more quickly than a person could manage.

As the designer's or user's aims and overall vision become clearer, the input parameters are fine-tuned to refine the solution space.

This avoids the problem of being locked into a single solution too early in the design phase, allowing for creative exploration of a broad variety of possibilities.

The expectation is that by doing so, the odds of achieving a result that best meets the defined design requirements will increase.
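
To make the idea concrete, here is a minimal Python sketch of such a generate-and-evaluate loop. The parameter names, the constraint, and the scoring function are invented for illustration and are not drawn from any particular generative design tool.

```python
import random

# Designer-specified solution space: each parameter and its allowable range.
# (These names and ranges are hypothetical.)
BOUNDS = {"width": (5.0, 15.0), "depth": (5.0, 15.0), "height": (3.0, 30.0)}


def generate_candidate(bounds):
    """Create one design variant by sampling each parameter within its bounds."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}


def satisfies_constraints(design):
    """Hypothetical constraint: the footprint must stay under 120 square meters."""
    return design["width"] * design["depth"] <= 120.0


def score(design):
    """Hypothetical objective: prefer designs with more height per unit of footprint."""
    return design["height"] / (design["width"] * design["depth"])


# Generate many variants, keep the feasible ones, and rank them by score.
candidates = [generate_candidate(BOUNDS) for _ in range(1000)]
feasible = [d for d in candidates if satisfies_constraints(d)]
best = sorted(feasible, key=score, reverse=True)[:5]

for design in best:
    print({k: round(v, 1) for k, v in design.items()}, "score:", round(score(design), 3))
```

In practice, the designer would tighten the bounds, constraints, and scoring function as the project vision becomes clearer, re-running the loop to narrow the solution space.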

It's worth noting that generative design doesn't have to be a digital process; an iterative approach might be created in a physical environment.

However, since a computer's processing capacity (i.e., the quantity and speed of calculations) greatly exceeds that of a person, generative design approaches are often equated with digital techniques.

The creative process is being aided by digital technologies, particularly artificial intelligence-based solutions.

Generative art and computational design in architecture are two examples of artificial intelligence applications.

The term "generative art," often known as "computer art," refers to artwork created in part with the help of a self-contained digital system.

Decisions that would normally be made by a human artist are delegated to an automated procedure in whole or in part.

Instead, the artist generally maintains some influence over the process by describing the inputs and rule sets to be followed.

Georg Nees, Frieder Nake, and A. Michael Noll are usually acknowledged as the inventors of visual computer art.

The "3N" group of computer pioneers is sometimes referred to as a unit.

Georg Nees is widely credited with mounting the first generative art exhibition, Computer Graphic, held in Stuttgart in 1965.

In the same year, exhibitions by Nake (in cooperation with Nees) and Noll were held in Stuttgart and New York City, respectively (Boden and Edmonds 2009).

These early examples of generative art in visual media were groundbreaking in their use of computers to generate works of art.

They were also constrained by the research methods available at the time.

In today's world, the availability of AI-based technology, along with exponential advances in processing power, has resulted in the emergence of new forms of generative art.

Computational creativity, described as "a discipline of artificial intelligence focused on designing agents that make creative goods autonomously," is an intriguing subset of these new efforts (Davis et al. 2016).

When it comes to generative art, the purpose of computational creativity is to use machine learning methods to tap into a computer's creative potential.

In this approach, the creative process shifts away from giving a computer step-by-step instructions (as was the case in the early days) and toward more abstract procedures with unpredictable outputs.

The DeepDream computer vision software, created by Google engineer Alexander Mordvintsev in 2015, is a modern example of computational creativity.

A convolutional neural network is used in this project to purposefully over-process a picture.

This brings out patterns that reflect how a particular layer in the network interprets an input image, based on the image types it has been trained to recognize.

The end result is a psychedelic reinterpretation of the original image, comparable to what one might see in a restless night's sleep.

Mordvintsev demonstrates how a neural network trained on a set of animals can take images of clouds and convert them into rough animal representations that match the detected features.

Using a different training set, the network would transform elements like horizon lines and towering vertical structures into squiggly representations of skyscrapers and buildings.

As a result, these new images might be regarded as unexpected, unique pieces of art produced entirely by the computer's own neural-network-driven creative process.
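
For readers interested in the mechanics, the sketch below shows a stripped-down version of this kind of feature amplification using PyTorch and a pretrained network. The choice of layer, step size, iteration count, and file names are assumptions made for illustration; they are not the settings of the original DeepDream project.

```python
# Minimal DeepDream-style sketch: gradient ascent on an image to excite one layer
# of a pretrained CNN. Hyperparameters and file names here are illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image itself will be optimized

preprocess = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
img = preprocess(Image.open("clouds.jpg").convert("RGB")).unsqueeze(0).requires_grad_(True)

TARGET_LAYER = 20  # an arbitrary mid-level convolutional layer (assumption)

for _ in range(20):
    activations = img
    for i, layer in enumerate(model):
        activations = layer(activations)
        if i == TARGET_LAYER:
            break
    # Gradient ascent: nudge the image so the chosen layer responds more strongly,
    # amplifying whatever patterns that layer has learned to detect.
    loss = activations.norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

transforms.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```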

Another contemporary example of computational creativity is My Artificial Muse.

Unlike DeepDream, which depends entirely on a neural network to create art, Artificial Muse investigates how an AI-based method might cooperate with a human to inspire new paintings (Barqué-Duran et al. 2018).

The neural network is trained on a massive collection of human poses culled from existing photos and rendered as stick figures.

This data is then used to generate an entirely new pose, which is fed back into the algorithm; the algorithm then reconstructs what it believes a painting based on this pose should look like.

As a result, the new pose can be seen as a muse for the algorithm, inspiring it to produce an entirely new image, which is subsequently executed as a painting by the artist.
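
Schematically, the first half of that pipeline might look like the toy sketch below. This is not the project's actual implementation: a simple Gaussian model stands in for the pose-generating network, and the pose format (15 joints with x/y coordinates) and the training data are hypothetical.

```python
import numpy as np

# Toy stand-in for learning a distribution over stick-figure poses and sampling a
# new one from it. Real training data would be poses extracted from photographs;
# here random numbers are used as a placeholder.
rng = np.random.default_rng(0)
training_poses = rng.normal(size=(500, 15, 2))  # 500 poses, 15 joints, (x, y) each

flat = training_poses.reshape(len(training_poses), -1)
mean = flat.mean(axis=0)
cov = np.cov(flat, rowvar=False)

# Sample an entirely new pose from the learned distribution. In the real project,
# a neural network would then render a painting conditioned on such a pose.
new_pose = rng.multivariate_normal(mean, cov).reshape(15, 2)
print(new_pose.round(2))
```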

Two-dimensional computer-aided drafting (CAD) systems were the first to bring computers into the field of architecture, and they were used to directly imitate the task of hand drafting.

Although using a computer to create drawings was still a manual process, it was seen as an advance over the analogue method because it allowed for greater accuracy and reproducibility.

These rudimentary CAD applications were soon superseded by more sophisticated parametric design software, which takes a more programmatic approach to constructing an architectural model (i.e., geometry is generated from user-specified variables).

Today, the most popular platform for this sort of work is Grasshopper (a plugin for the three-dimensional computer-aided design software Rhino), which was created by David Rutten in 2007 while working at Robert McNeel & Associates.

Take, for example, defining a rectangle, which is a pretty straightforward geometric problem.

In a parametric modeling approach, the length and width values would be defined as user-controlled parameters.

The program would automatically change the final design (i.e., the rectangle drawing) based on the parameter values provided.

Imagine this on a bigger scale, where a set of parameters connects a complicated collection of geometric representations (e.g., curves, surfaces, planes, etc.).

As a consequence, basic user-specified parameters may be used to determine the output of a complicated geometric design.
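
The underlying idea can be expressed in a few lines of plain Python. Grasshopper itself is a visual programming environment, so this is only an analogy to its way of working, not its API.

```python
# A parametric "rectangle" driven entirely by two user-controlled parameters.
def rectangle(length, width):
    """Return the four corner points of a rectangle with one corner at the origin."""
    return [(0, 0), (length, 0), (length, width), (0, width)]

# Changing the parameter values automatically changes the resulting geometry.
print(rectangle(length=4, width=2))   # [(0, 0), (4, 0), (4, 2), (0, 2)]
print(rectangle(length=10, width=3))  # [(0, 0), (10, 0), (10, 3), (0, 3)]
```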

A further advantage is that parameters can interact in unexpected ways, producing outcomes that a designer might not have imagined.

Although parametric design uses a computer to produce and display complex results, the process is still manual.

A set of parameters must be specified and controlled by a person.

The computer or program that performs the design computations is given more agency in generative design methodologies.

Neural networks may be trained on examples of designs that meet a project's general aims, and then used to create multiple design proposals using fresh input data.

A recent example of generative design in an architectural environment is the layout of the new Autodesk headquarters in Toronto's MaRS Innovation District (Autodesk 2016).

Existing employees were surveyed as part of this initiative, and data was collected on six quantifiable goals: work style preference, adjacency preference, degree of distraction, interconnection, daylight, and views to the outside.

All of these requirements were taken into account by the generative design algorithm, which generated numerous office arrangements that met or exceeded the stated standards.

These results were analyzed, and the highest-scoring layouts were used to design the new office arrangement.

In this way, a huge quantity of data, including data from prior projects and user-specified data, was used to arrive at a final, optimized design.

The relationships within the data would have been too complex for a person to grasp unaided, and could only be fully explored through a generative design technique.
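
The flavor of such a multi-objective search can be conveyed with a toy sketch. The metric names below roughly follow the six goals described above (with distraction inverted so that higher is better), but the layout generator, the scoring, and the equal weighting are invented for illustration and are not Autodesk's actual method.

```python
import random

GOALS = ["work_style", "adjacency", "low_distraction",
         "interconnection", "daylight", "views"]


def generate_layout():
    """Stand-in for a real layout generator: each goal gets a simulated score in [0, 1]."""
    return {goal: random.random() for goal in GOALS}


def total_score(layout, weights):
    """Weighted sum across the six measurable goals."""
    return sum(weights[goal] * layout[goal] for goal in GOALS)


weights = {goal: 1.0 for goal in GOALS}  # equal weighting (an assumption)
layouts = [generate_layout() for _ in range(500)]
best = max(layouts, key=lambda layout: total_score(layout, weights))

print("best layout:", {goal: round(best[goal], 2) for goal in GOALS})
print("total score:", round(total_score(best, weights), 2))
```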

Generative design techniques have proven beneficial in a broad variety of applications where a designer wants to explore a large solution space.

They avoid the problem of committing to a single solution too early in the design phase by allowing creative exploration of a variety of possibilities.

As AI-based computational approaches develop, generative design will find new uses.


Jai Krishna Ponnappan


You may also want to read more about Artificial Intelligence here.



See also: 

Computational Creativity.


Further Reading:


Autodesk. 2016. “Autodesk @ MaRS.” Autodesk Research. https://www.autodeskresearch.com/projects/autodesk-mars.

Barqué-Duran, Albert, Mario Klingemann, and Marc Marzenit. 2018. “My Artificial Muse.” https://albertbarque.com/myartificialmuse.

Boden, Margaret A., and Ernest A. Edmonds. 2009. “What Is Generative Art?” Digital Creativity 20, no. 1–2: 21–46.

Davis, Nicholas, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko. 2016. “Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent.” In Proceedings of the 21st International Conference on Intelligent User Interfaces—IUI ’16, 196–207. Sonoma, CA: ACM Press.

Menges, Achim, and Sean Ahlquist, eds. 2011. Computational Design Thinking: Computation Design Thinking. Chichester, UK: J. Wiley & Sons.

Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 2015. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog. https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.

Nagy, Danil, and Lorenzo Villaggi. 2017. “Generative Design for Architectural Space Planning.” https://www.autodesk.com/autodesk-university/article/Generative-Design-Architectural-Space-Planning-2019.

Picon, Antoine. 2010. Digital Culture in Architecture: An Introduction for the Design Professions. Basel, Switzerland: Birkhäuser Architecture.

Rutten, David. 2007. “Grasshopper: Algorithmic Modeling for Rhino.” https://www.grasshopper3d.com/.




