Art printmaking, but neurally?

As my postings in this blog show, I got interested in image processing with neural networks through style transfer, two and a half years ago. Style transfer takes two images, a content image and a style image, and recreates the content image using the color and texture of the style image. During the last year or so, however, I have mainly experimented with methods in which a neural network creates or transforms an image based on what it has learned in general, without any specific style image. A classic example of this approach is pix2pix, which is trained on image pairs.
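For readers unfamiliar with pix2pix: its usual aligned-dataset convention stores each training pair as a single image, with the input and the target concatenated side by side. A minimal sketch of building such a pair (toy arrays stand in for real contour/photo images):

```python
import numpy as np

def make_aligned_pair(contour_img, photo_img):
    """Concatenate a contour image and its photo side by side,
    the aligned A|B layout that pix2pix training scripts expect."""
    assert contour_img.shape == photo_img.shape
    return np.concatenate([contour_img, photo_img], axis=1)

# toy 4x4 grayscale "images": black contour plate, white photo
contour = np.zeros((4, 4), dtype=np.uint8)
photo = np.full((4, 4), 255, dtype=np.uint8)
pair = make_aligned_pair(contour, photo)  # shape (4, 8)
```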


In my experiments, I tend to use my own photo archive. One particular technique is based on transforming the images into contours, and then training a model to recreate the image from those contours alone. My goal is not, however, to recreate an exact copy of the original photo, but rather to find artistically meaningful image transforms. Using contours as an intermediate step, somewhat like the printmaker’s plate, I have been getting results that often remind me of art printmaking.
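The post does not say how the contours are extracted; edge detectors such as Canny are a common choice. A minimal, hypothetical numpy sketch of the basic idea, thresholding gradient magnitude into a binary line image:

```python
import numpy as np

def contour_plate(img, threshold=32):
    """Very rough contour 'plate': threshold the image's gradient
    magnitude into a binary line image (a stand-in for a real
    edge detector such as Canny)."""
    img = img.astype(np.float32)
    gy, gx = np.gradient(img)          # derivatives along rows, cols
    mag = np.hypot(gx, gy)             # gradient magnitude
    return (mag > threshold).astype(np.uint8) * 255

# toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
plate = contour_plate(img)  # white line at the brightness boundary
```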


Initially, the contour plates are derived directly from the photos. Later on, the plates can be edited by deleting or replicating parts of the image, or by combining contours from several images, resulting in a collage technique. Because the collage is created at the contour level, with the details added by the neural model, the final image usually appears quite seamless.
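The contour-level collage can be sketched as a masked cut-and-paste between two plates; the plates and mask below are toy placeholders, not the actual editing workflow:

```python
import numpy as np

def collage(plate_a, plate_b, mask):
    """Cut-and-paste at the contour level: take lines from plate_a
    where mask is True, and from plate_b everywhere else."""
    return np.where(mask, plate_a, plate_b)

a = np.full((4, 4), 255, dtype=np.uint8)   # plate full of lines
b = np.zeros((4, 4), dtype=np.uint8)       # empty plate
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True                         # keep the left half of a
combined = collage(a, b, mask)
```

The seamlessness the post describes comes afterwards, when the trained model renders details over the combined plate.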


Extending the process to video, creating a moving art print, is also worth experimenting with.

A random factor can also be added to the process, to emulate how a real printmaking process yields somewhat different copies of the same image.
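One simple way to introduce such a random factor, assuming the model accepts an extra noise channel alongside the plate (an assumption for illustration, not the author's stated method), is to pair the fixed contour plate with fresh noise on each run:

```python
import numpy as np

def plate_with_noise(plate, rng):
    """Stack the contour plate with a random noise channel; feeding
    the same plate with different noise yields different 'prints'."""
    noise = rng.standard_normal(plate.shape).astype(np.float32)
    return np.stack([plate.astype(np.float32), noise], axis=0)

plate = np.zeros((4, 4), dtype=np.uint8)
a = plate_with_noise(plate, np.random.default_rng(1))
b = plate_with_noise(plate, np.random.default_rng(2))
# a and b share the plate channel but differ in the noise channel
```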


Variations in the process can also produce different styles, some quite painting-like, without any explicit style model. So far, however, the process is difficult to control, requiring much experimentation when looking for a specific visual effect.


I am continuing to work on this, trying different solutions and training materials. My latest experiment used image pairs like this one, with a contour print plate and a processed copy of a photo.


As is typical for these experiments, this approach gave good results with only a small part of the photo material. For me, that is not a problem: I am not trying to build a machine that makes anything look great, but, step by step, to get a grip on the process so that I can create pictures that express my own creative needs. The following examples from the latest run are not yet finished works of mine, just material along the way. Still, I have given them names, so as to assign some meaning to each of them.


“The Rift”




“Writing on the wall”


“Who are we listening to?”


“What happens out there?”


“On the beach”


“Relaxing after work”
