Essentially, a GAN pits two neural networks against each other: one network generates new images, while the other critiques them by judging how real they look.
This process is made efficient and effective by the feedback loop between the two networks: the discriminator's judgments tell the generator how to improve its output, and the generator's improved output in turn tells the discriminator how well it is performing. The loop continues until the two networks reach an equilibrium state in which neither can gain any further advantage over the other.
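The feedback loop above can be illustrated with a deliberately tiny sketch. This is not a real GAN (no neural networks, no gradients); it only shows the mechanism: a fixed "discriminator" scores samples, and a one-parameter "generator" uses that score as feedback to move its output toward the real data. All names and values here are illustrative assumptions.

```python
import random

random.seed(0)

def discriminator(x, real_mean=10.0):
    # Scores a sample: 1.0 when x matches the real data exactly,
    # falling linearly toward 0.0 as it moves away.
    return max(0.0, 1.0 - abs(x - real_mean) / 10.0)

mu = 0.0  # the generator's only parameter, initialized far from the real data
for step in range(200):
    sample = mu + random.gauss(0, 0.1)   # generator's output
    score = discriminator(sample)        # discriminator's feedback
    # Generator step: nudge mu in whichever direction raises its score.
    if discriminator(mu + 0.1) > score:
        mu += 0.1
    elif discriminator(mu - 0.1) > score:
        mu -= 0.1

# mu converges toward the real mean of 10: the generator has used the
# discriminator's feedback to match the "real" data distribution.
```

In a real GAN both sides are neural networks updated by gradient descent, but the shape of the interaction (score, feedback, adjustment) is the same.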
A common method for generating artworks with generative adversarial networks (GANs) is the conditional GAN (cGAN). In a cGAN, both the generator and the discriminator receive extra conditioning information, such as class or style labels, alongside their usual inputs, so the generator produces images that match the condition and the discriminator judges them against it.
The first network, the Image Generator (IG), takes two sets of inputs. One set identifies a source artwork from which it can pull characteristics such as texture, shape, and color; the other specifies aesthetic qualities, such as “beautiful” or “ugly,” that the new image should exhibit. Together, these inputs determine what the IG attempts to produce from its given source artwork.
The second network, the Discriminator (D), receives the same two sets of inputs but uses them differently: it judges how closely an image produced by the IG matches the specified characteristics and qualities. An image that meets them perfectly is rated 100%, while one that misses them entirely is rated 0%.
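A minimal sketch of this conditional setup, under toy assumptions: the "artwork" is a single number, the condition is a style label, the generator blends the source toward the style target, and the discriminator rates the result on the 0%-to-100% scale described above. The `STYLE_TARGETS` values, the blending rule, and the names are all hypothetical, chosen only to make the two roles concrete.

```python
# Toy aesthetic targets for each style label (illustrative values).
STYLE_TARGETS = {"beautiful": 0.9, "ugly": 0.1}

def image_generator(source_value, style):
    # Produces a new "image" (a single number) by blending the source
    # artwork's value halfway toward the style target.
    target = STYLE_TARGETS[style]
    return 0.5 * source_value + 0.5 * target

def discriminator(image_value, style):
    # Rates the image from 0.0 (misses the condition entirely)
    # to 1.0 (matches it perfectly).
    target = STYLE_TARGETS[style]
    return max(0.0, 1.0 - abs(image_value - target))

img = image_generator(0.7, "beautiful")
score = discriminator(img, "beautiful")
print(round(img, 2), round(score, 2))  # → 0.8 0.9
```

The key point is that both functions see the same condition (`style`), which is exactly what distinguishes a conditional GAN from an unconditional one.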
IG and D are initialized at random, with little to no understanding of their respective tasks. Through trial-and-error training over many iterations, each gradually improves until the pair reaches equilibrium, by which point the IG has become highly competent at producing images and D at judging them.
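The path from random initialization to equilibrium can be sketched as alternating updates, again with a toy model rather than real networks: both parties start with random parameters, each update step improves one of them relative to the other, and after enough iterations neither can improve further because both have converged on the real data's statistics. The constants and update rules are illustrative assumptions.

```python
import random

random.seed(1)

real_mean = 5.0
g = random.uniform(-10, 10)  # generator's guess at the real data (random init)
d = random.uniform(-10, 10)  # discriminator's estimate of the real data (random init)

for _ in range(500):
    # Discriminator step: move its estimate toward the real data,
    # sharpening its ability to judge samples.
    d += 0.1 * (real_mean - d)
    # Generator step: move toward whatever the discriminator currently accepts.
    g += 0.1 * (d - g)

# At equilibrium both parameters agree with the real data's mean:
# neither side can improve relative to the other.
```

Real GAN training is far less well-behaved than this (it can oscillate or collapse), but the endpoint being described, mutual convergence rather than one side "winning," is the same.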
This approach has produced aesthetically pleasing results across many genres of art, including digital paintings, drawings, sculptures, photographs, videos, and music, with little direct human involvement. It also gives rise to distinctive artistic styles that humans often find difficult to identify as AI-generated, which speaks to the complexity and depth of the generated work.