
Finally, AI algorithms able to create original art — but also fake audio and video

GANs (generative adversarial networks) are a class of deep neural networks in which two networks compete against each other to produce everything from photorealistic images to robotic behavior


By Brian Santo, contributing writer

A brand spanking new category of neural network used for any number of scientific purposes has also been rapidly adopted by artists to perform various types of human mimicry, and the results are at once amazing and disturbing.

Generative adversarial networks (GANs) were first proposed only three years ago by a team of researchers led by Ian Goodfellow, a staff research scientist at Google Brain. The basic idea is to train two artificial intelligences at once: one generates candidate results while the other challenges them, and the first keeps refining its output until it produces results that the second can no longer tell apart from the real thing.

As they describe it in the abstract of their paper, the idea is to create “a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.”
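In code, that adversarial loop is compact. What follows is a minimal sketch, assuming PyTorch and a toy one-dimensional target distribution; the layer sizes, batch size, and names such as latent_dim are illustrative choices, not details from the paper.

# Minimal GAN training sketch (illustrative only; assumes PyTorch).
# D learns to tell real samples from G's fakes; G is trained so that D
# mistakes the fakes for real data, per the objective quoted above.
import torch
import torch.nn as nn

latent_dim = 8        # size of the random noise vector fed to the generator
data_dim = 1          # toy "real" data: 1-D samples drawn from N(3.0, 0.5)

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label = torch.ones(64, 1)
fake_label = torch.zeros(64, 1)

for step in range(5000):
    # Train D: push D(real) toward 1 and D(G(z)) toward 0.
    real = torch.randn(64, data_dim) * 0.5 + 3.0
    fake = G(torch.randn(64, latent_dim)).detach()   # detach so this step updates D only
    loss_D = bce(D(real), real_label) + bce(D(fake), fake_label)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Train G: label its fakes "real" so the loss rewards fooling D.
    fake = G(torch.randn(64, latent_dim))
    loss_G = bce(D(fake), real_label)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

The paper frames this formally as a minimax game between G and D; the sketch above simply alternates one gradient step for the discriminator with one for the generator.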

At first, GANs were used for precisely the type of wonkitude one would expect. For example, one set of researchers is using them to improve the imaging results of advanced telescopes. Another group is using GANs to automatically infer the behavior of animals.

But then artists almost immediately got a hold of GANs.

Computers have been used to generate art for decades, some of it considered compelling as art. Most of these efforts tended to either depend on some measure of human manipulation of the process or directly mimic existing pictures or styles. GANs are different because they can automatically generate images that do not directly copy or correspond to anything that already exists.

The Art and Artificial Intelligence Lab at Rutgers University developed a GAN and trained it to identify, classify, and rank different paintings and styles of paintings. In 2015, that AI made some interesting new comparisons of works from artists from different eras and different schools of painting and came up with rankings that intrigued the art world. Following that, the Rutgers team worked with researchers at the College of Charleston and with Facebook to develop a version of a GAN called a creative adversarial network, or CAN, that synthesizes different painting styles to create a stylistically unique image. The results are at once beautiful and strange.

And somewhat controversial. In recent months, some of the works of art generated by GANs and CANs have been set up next to works generated by humans, and viewers have been asked to guess who (or what) generated each. The viewers are not always correct, reviving questions about what creativity is.

Yet another artist, Mike Tyka, is using GANs to create portraits of people who never existed.


Art by Mike Tyka, mytka.github.io.

While it might be fun to argue about the nature of creativity — and even the nature of humanity — in the context of art, it is unlikely that computer-generated art will ever dissuade humans from generating art of their own.

Far more troubling is the recent use of GANs to easily create video of events that never happened.

In February, an artist named Mario Klingemann posted on YouTube a video that he generated using a GAN. He fed the GAN video of twenty-something singer Françoise Hardy and an audio clip of Donald Trump advisor Kellyanne Conway, and the result was a clip in which it appears as if Hardy is delivering Conway’s dialog. An extra twist is that the video footage is archival; Hardy is now in her 70s.

Manipulation of video is hardly new and keeps getting more sophisticated; think of the appearance of a remarkably lifelike (but not quite perfect) digital Peter Cushing in “Rogue One: A Star Wars Story.”

In the past, however, doing something like that required the active use of sophisticated editing tools. Klingemann’s GAN ran on a PC and spit out its results automatically.

There are any number of ways that GANs can be used for creating art. That’s not what’s troubling some people. The worrisome thing is that GANs have just made it significantly easier to create fake video.

There’s been a steady, inexorable undermining of the concept of seeing-is-believing over the course of decades; Photoshop was a blow to the inherent veracity of photographs 30 years ago. Now that the capability to fake video has been democratized, the worry is that generators of truly fake news (not just news you don’t want to agree with, but deliberately false propaganda) will be able to cause greater mischief and outright damage.
