Programmer Peter Baylies has built an improved version of StyleGAN, a generative adversarial network (GAN). Called StyleGAN2, his model can generate pictures that either mimic or mix the styles of famous artists.

Peter Baylies started his experiment late last year by feeding his upgraded model the WikiArt paintings dataset, a collection that currently holds more than 80,000 fine-art paintings by more than 1,000 artists, ranging from the fifteenth century to modern times, and training it to distinguish between individual artistic styles. By now, StyleGAN2 is capable of producing striking images that speak for themselves.

Users are free to combine any artistic styles they like and join Peter Baylies in his experiment.

Picture by Peter Baylies, from archive.org

Invented by Ian Goodfellow in 2014, a generative adversarial network is composed of two distinct networks: a generator and a discriminator. The generator takes a random vector as input and produces an output data point, in this case an image, that attempts to mimic the data points in a given dataset. The discriminator takes these images as input and determines whether they come from the dataset or were created by the generator. When trained on a significant amount of unlabeled data via unsupervised learning, GANs can learn the underlying patterns and structures and generate entirely new images that could plausibly belong to the dataset.
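To make the generator/discriminator interplay concrete, here is a minimal training-loop sketch in PyTorch. This is an illustrative toy, not Baylies' or Nvidia's code; the network sizes, learning rates, and the `train_step` helper are all assumptions made for the example.

```python
# Minimal GAN sketch (illustrative, assumed hyperparameters).
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random input vector
IMG_PIXELS = 64 * 64 * 3  # flattened 64x64 RGB image

# Generator: random vector -> fake image (values in [-1, 1]).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: image -> single real/fake logit.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update on a batch of flattened images in [-1, 1]."""
    batch = real_images.size(0)
    z = torch.randn(batch, LATENT_DIM)
    fake_images = generator(z)

    # Discriminator: push real images toward label 1, fakes toward 0.
    d_opt.zero_grad()
    d_loss = (loss(discriminator(real_images), torch.ones(batch, 1)) +
              loss(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes real (1).
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

Training alternates these two updates over the dataset until the generator's samples become hard for the discriminator to tell apart from real ones.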

StyleGAN itself was developed by Nvidia, which released it as an open-source tool last year. It builds artificial images gradually, from very low to higher resolutions, through progressive layer growing, and it modifies the input of each level separately, which allows individual image attributes, from coarse features to finer details, to be changed without affecting the other levels.
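The sketch below illustrates the idea of per-level style injection in PyTorch, loosely following the adaptive instance normalization used in the original StyleGAN. It is a simplified toy, not Nvidia's implementation; the class name, channel counts, and style dimension are assumptions for the example.

```python
# Toy sketch of per-layer style injection (simplified AdaIN-style).
import torch
import torch.nn as nn

class StyledLayer(nn.Module):
    """One synthesis layer whose feature statistics are set by a style vector."""
    def __init__(self, channels, style_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels)         # strip old statistics
        self.to_scale = nn.Linear(style_dim, channels)  # per-channel scale from style
        self.to_shift = nn.Linear(style_dim, channels)  # per-channel shift from style

    def forward(self, x, style):
        x = self.norm(self.conv(x))
        scale = self.to_scale(style).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(style).unsqueeze(-1).unsqueeze(-1)
        return x * (1 + scale) + shift

# Feeding different style vectors to coarse (low-resolution) and fine
# (high-resolution) layers is what makes it possible to mix attributes
# of two styles in one image.
coarse = StyledLayer(channels=64, style_dim=512)
fine = StyledLayer(channels=64, style_dim=512)
features = torch.randn(1, 64, 8, 8)
style_a, style_b = torch.randn(1, 512), torch.randn(1, 512)
out = coarse(features, style_a)  # style A controls coarse structure
# ...upsampling layers omitted...
out = fine(out, style_b)         # style B controls finer details
```

Because each level receives its own style vector, swapping the vector at only the fine levels changes texture-like details while leaving the overall composition intact, which is the mechanism behind the style mixing described above.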
