A.I. Painter Emulates Great Artists

Credit: DeepArt UG

Putting the Art in Artificial Intelligence: Photos

A few weeks back, researchers with Google's artificial neural networks team published a blog post about the team's A.I. system, Deep Dream, which could see pictures in clouds and even (arguably) create original art. Yesterday, a team of four engineering students at Hack Reactor announced via Popular Science that they were coming out with an app called Dreamify that uses Deep Dream's source code to create psychedelic art out of ordinary images. It gets technical, and a little existential, but the basic gist is that by running an image-recognition process in reverse, the system was able to generate original images rather than just identify them. After training the system with thousands of images of a particular object -- a starfish, say -- the team discovered that the neural network would identify "starfishy" elements in other, unrelated images. The results are trippy, to say the least. But Deep Dream is not the first computer to generate art. We take a look at it here, along with some other examples of machine-generated art.

Credit: Google Research Blog


Deep Dream can generate surprisingly compelling images, depending on what parameters are established when it first begins to process a picture. Each layer in a neural network builds on the ones beneath it, so running an image through lower layers tends to generate lines and simple patterns. In the higher-level layers, however, the network is looking for more sophisticated features and will tend to generate complex images and entire objects. When the Google team had Deep Dream process an image of a cloudy sky, it began creating images of fantastic hybrid animals like the "pig-snail" and the "camel-bird." Google's name for the process? "Inceptionism." In the image above, a neural network programmed to distinguish architectural and animal elements was cut loose on a landscape. The resulting output is not based on any sample image -- it's purely a result of the A.I.'s "thoughts" on the issue.

Credit: Google Research Blog
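For readers who want to poke at the idea themselves, here is a rough sketch of the layer-amplification trick described above, written in Python with the openly available PyTorch and torchvision libraries rather than Google's own code. It nudges an input photo by gradient ascent so that one chosen layer of a pretrained recognition network fires more strongly; the network, layer index, step count and file names are illustrative assumptions, not Deep Dream's actual settings.

```python
# A minimal sketch of the Deep Dream idea: amplify whatever a chosen layer
# of a pretrained image-recognition network "sees" in a photo. Lower layers
# tend to amplify lines and textures, higher layers whole objects.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[10]   # an arbitrary mid-level layer; try others

activations = {}
def hook(module, inputs, output):
    activations["target"] = output
layer.register_forward_hook(hook)

# ImageNet normalization is omitted for brevity; "clouds.jpg" is a stand-in.
preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("clouds.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                           # a few gradient-ascent steps
    model(img)
    loss = activations["target"].norm()       # "make this layer fire harder"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```

Running the loop for more steps, or hooking a deeper layer, pushes the picture further toward the hallucinated objects the Google team describes.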

Google has since published the Deep Dream source code, putting A.I. artistry in the hands of the people. Almost immediately after the code was made public, enterprising engineers and hobbyists began creating tools to explore the possibilities of Google's neural network. Dreamscope is one of several Web apps that have popped up in recent days, and it looks like Instagram on powerful alkaloids. While Dreamscope doesn't give access to the full spectrum of Deep Dream's abilities, it does make the process quick and easy. Just upload an image, select one of the 19 provided filters, and you'll get your own A.I. art show within about 15 seconds. (The first wave of "user-friendly" Deep Dream tools took hours or even days to process an image.) Above is one of the world's most famous public domain images -- the 1970 meeting between Richard Nixon and Elvis Presley -- as run through Dreamscope's "demonic" filter. Captures the moment nicely, doesn't it?

Credit: Dreamscope


The imagery Deep Dream produces is unique in terms of how it's produced, but machine-generated art -- sometimes called digital art or generative art -- has actually been around for quite a while. Probably the most familiar example is fractal art, in which dedicated software turns recursive equations into still images and animations. Fractals occur both in mathematics and in nature: in a fractal, recursive patterns repeat at different scales, so that a tiny sliver of a fern leaf looks much the same as the larger fern leaf itself. These repeating geometric patterns can be plotted mathematically, in two or three dimensions, then converted into lines, shapes and colors. The resulting images are virtually infinite in variety and complexity, depending on how you tweak each iteration of a fractal.

Credit: ThinkStock
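To make the recipe concrete, here is a minimal Python sketch of the most famous example, the Mandelbrot set: iterate a simple recursive equation over a grid of points and turn each point's "escape time" into a color. The resolution, iteration cap and color map are arbitrary choices for illustration.

```python
# Iterate z -> z**2 + c for every pixel and color each point by how quickly it escapes.
import numpy as np
import matplotlib.pyplot as plt

width, height, max_iter = 800, 600, 80
x = np.linspace(-2.0, 1.0, width)
y = np.linspace(-1.2, 1.2, height)
c = x[np.newaxis, :] + 1j * y[:, np.newaxis]   # one complex number per pixel

z = np.zeros_like(c)
escape = np.zeros(c.shape, dtype=int)
for i in range(max_iter):
    mask = np.abs(z) <= 2          # points that have not escaped yet
    z[mask] = z[mask] ** 2 + c[mask]
    escape[mask] = i               # record how long each point survives

plt.imshow(escape, cmap="magma", extent=(-2.0, 1.0, -1.2, 1.2))
plt.axis("off")
plt.savefig("mandelbrot.png", bbox_inches="tight", dpi=150)
```

Zooming in on any boundary region and re-running the loop reveals the same filigree at ever-smaller scales, which is exactly the self-similarity described above.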


Machine-generated art has been exhibited in galleries all over the world since at least the 1960s. But artists and historians have long disagreed over whether such exhibits are truly created by computers, or whether computers are simply another tool used by the human artist. Another open question: Can a machine-generated image or object even be called "Art"? British computer scientist Simon Colton has been exploring these questions with his A.I. project known as The Painting Fool. The A.I. system, adapted for exhibition in galleries, takes a digital picture of each visitor, then selects from thousands of abstract templates and image filters. The Painting Fool makes its choices depending upon processes that govern the machine's "mood" -- for instance, scanning text from a newspaper. If its mood is dark enough, it might not paint at all. The Painting Fool also learns from its mistakes, and Colton is continually adjusting the A.I.'s algorithms to meet his seven criteria for true creativity: skill, appreciation, imagination, learning, intentionality, reflection and invention. The program has recently branched out to start producing sculptures, animations and poetry.

Credit: The Painting Fool
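As a purely hypothetical illustration of that mood-driven selection (none of this is Colton's actual code), the toy Python sketch below scores the day's headlines, maps the score to a mood, and lets the mood pick a rendering style or decline to paint. The word lists, thresholds and style names are invented.

```python
# Toy "mood" pipeline: headlines -> sentiment score -> mood -> style (or refusal).
import random

NEGATIVE = {"war", "death", "crisis", "disaster", "loss"}
POSITIVE = {"peace", "win", "rescue", "joy", "recovery"}
STYLES = {"gloomy": "charcoal_smudge", "neutral": "pastel_wash", "upbeat": "bright_acrylic"}

def mood_from_headlines(headlines):
    words = " ".join(headlines).lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score < -2:
        return "gloomy"
    return "upbeat" if score > 2 else "neutral"

def paint(portrait_path, headlines):
    mood = mood_from_headlines(headlines)
    if mood == "gloomy" and random.random() < 0.5:
        return None                # in a dark enough mood, it may refuse to paint
    return f"render {portrait_path} with {STYLES[mood]}"

print(paint("visitor.jpg", ["Peace talks bring joy as markets recover"]))
```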


But there's still that sticky question about Art, with a capital A. Even if a machine does generate original images -- or objects or manuscripts -- do these creations truly constitute artistic expression? The software system known as AARON, for instance, has been creating original artistic images since 1973. Developed by painter and computer scientist Harold Cohen, the program has gone through different stylistic periods in which it has created both highly abstract and highly representational images. AARON's drawings are created through a system of custom printing machines and have been exhibited at the Tate Gallery in London. But while Cohen describes AARON as an A.I., he has officially left the issue of Art as an open question. In an effort to resolve the issue, computer researcher Mark Riedl recently proposed a new variation on the Turing Test, designed to identify true artificial intelligence. His Lovelace 2.0 test would require that an A.I. produce a range of creative work -- paintings, poems, designs -- that expert observers would find indistinguishable from the work of a human artist. Riedl's contention: If a machine can create art that is indistinguishable from human art, then the A.I. has achieved human-level intelligence.

Credit: Computer History Museum

Artificial intelligence seems to be exploring its artistic side of late, and the results are getting pretty sophisticated. To wit: A new online image processing program uses cutting-edge deep-learning algorithms to create digital paintings in the style of famous painters.

Actually, the A.I. can create new images from uploaded photos that emulate virtually any style or artist — and it’s already online and free to use. More on that in a minute.

Developed by researchers at EPFL and other European universities, DeepArt appears to function like a standard photo filter. You upload a photo, choose a style, and get back a new image. But the technology under the hood is fundamentally different.

Similar to Google’s Deep Dream A.I. system, which made quite the splash last summer, DeepArt uses advanced neural networking technology to create original art when provided with two or more source images.

Sort of like image recognition in reverse — or maybe sideways — the A.I. looks for patterns in Image A then replicates the patterns within the visual data of Image B. By repeatedly comparing features within the two images, back and forth, the system creates an original third image. The process takes about 10 minutes of computation time on average.

“The neural network that we’re using was initially made for object recognition,” says DeepArt co-founder Lukasz Kidzinski in the demo video. “The network tries to extract very simple features from the photo, like small shapes. The next step is to try to generate a new picture which has similar representation of style that you provided. It tries to [find] a picture that is closest to the representation of the style and the content.”
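The process Kidzinski describes is, at its core, neural style transfer: reuse an object-recognition network's features, then optimize a new image so its content features match the photo while its style statistics (correlations between feature channels, captured in Gram matrices) match the painting. The Python sketch below uses PyTorch rather than DeepArt's own code, and the layer indices, weights and file names are illustrative guesses, not DeepArt's actual configuration.

```python
# Condensed style-transfer sketch: match content features of "photo.jpg" and
# style statistics of "painting.jpg" by optimizing the pixels of a new image.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

load = T.Compose([T.Resize((256, 256)), T.ToTensor()])
content = load(Image.open("photo.jpg").convert("RGB")).unsqueeze(0).to(device)
style = load(Image.open("painting.jpg").convert("RGB")).unsqueeze(0).to(device)

STYLE_LAYERS, CONTENT_LAYER = {0, 5, 10, 19, 28}, 21   # indices into vgg19.features

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(f):                       # style = correlations between feature channels
    _, ch, h, w = f.shape
    f = f.view(ch, h * w)
    return f @ f.t() / (ch * h * w)

content_feats = features(content)
style_grams = {i: gram(f) for i, f in features(style).items() if i in STYLE_LAYERS}

img = content.clone().requires_grad_(True)    # start from the photo and repaint it
opt = torch.optim.Adam([img], lr=0.02)
for step in range(200):
    opt.zero_grad()
    feats = features(img)
    c_loss = F.mse_loss(feats[CONTENT_LAYER], content_feats[CONTENT_LAYER])
    s_loss = sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
    (c_loss + 1e4 * s_loss).backward()        # the style weight is a hand-tuned guess
    opt.step()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1).cpu()).save("stylized.jpg")
```

Swapping in a different painting for "painting.jpg" is all it takes to emulate a different artist, which is essentially what the DeepArt site automates.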

The images on display at the DeepArt site are genuinely impressive, and the submission process is dead simple. Upload a personal photo, plus a famous painting, and DeepArt will email you the new image.

Alas, art and commerce soon collide, as they have throughout history. When I uploaded a portrait photo of my second grader, plus a digital copy of Monet’s famous Impression, Sunrise, I was looking forward to the results. But the system told me that, due to demand, wait time for the return image would be about 10 hours.

However, for the low, low price of 1.99 Euros, I could get the image in 15 minutes! The company also offers some additional pricing options for posters and gallery prints.

Nothing wrong with that, of course. DeepArt is an overtly commercial endeavor and, in fact, is just one of several A.I. image processing services to pop up in the wake of Google’s open source Deep Dream initiative last summer. But those generated images really are pretty great. Check it out for yourself at DeepArt.io.