As my postings in this blog show, I got interested in image processing by neural networks through style transfer, two and a half years ago. Style transfer takes two images, a content image and another image to act as a style model, and then recreates the content image using color and texture from the style model. During the last year or so, however, I have mainly experimented with methods in which a neural network creates or transforms an image based on what it has learned in general, without a specific style image. A classic example of this approach is pix2pix, which is trained on image pairs.
In my experiments, I tend to use my own photo archive. One particular technique is based on transforming the images into contours, and then training a model to recreate the image from the contours alone. My goal is not, however, to recreate an exact copy of the original photo, but rather to find artistically meaningful image transforms. Using contours as an intermediate step, somewhat resembling the printmaker’s plate, I have been getting results that often remind me of art printmaking.
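To give a concrete idea of what such a contour plate could look like in code, here is a minimal sketch using OpenCV’s Canny edge detector. This is only an illustration under my own assumptions; the blog does not specify how the contours are actually derived, and the blur kernel and thresholds here are placeholders.

```python
# Minimal sketch: derive a contour "plate" from a photo with Canny edges.
# The blur kernel and thresholds are illustrative assumptions only.
import cv2

def make_contour_plate(photo_path: str, plate_path: str) -> None:
    image = cv2.imread(photo_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # A light blur suppresses fine noise before edge detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    # Invert so contours appear as dark lines on a white plate.
    plate = cv2.bitwise_not(edges)
    cv2.imwrite(plate_path, plate)

make_contour_plate("photo.jpg", "plate.png")
```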
Initially, the contour plates are derived directly from the photos. Later on, the contour plates can be edited, by deleting or replicating parts of the image, or by combining contours from several images, resulting in a collage technique. As the collage is created on the contour level, with the details added by the neural model, the final image usually appears quite seamless.
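One way to read the collage step as code: since the plates are just line images, they can be combined with simple pixel operations before being handed to the model. A rough sketch, where the file names, the masked region, and the darken-style merge are my own assumptions:

```python
# Rough sketch: collage two contour plates by keeping the darker line pixels,
# optionally deleting a region first. Assumes both plates have the same size.
import numpy as np
from PIL import Image

plate_a = np.array(Image.open("plate_a.png").convert("L"))
plate_b = np.array(Image.open("plate_b.png").convert("L"))

# Delete part of plate_a by painting a rectangle white (no contours there).
plate_a[100:300, 200:400] = 255

# Merge: a dark pixel in either plate becomes a contour in the collage.
collage = np.minimum(plate_a, plate_b)
Image.fromarray(collage).save("collage_plate.png")
```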
Extending the process to video, creating a moving art print, is also worth experimenting with.
Adding a random factor to the process can emulate how a real printmaking process results in somewhat different copies of the same image.
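There are many ways to inject such randomness; one simple option, sketched below under my own assumptions rather than as the method actually used, is to jitter the contour plate slightly before each generation pass so that every "print" comes out a little different.

```python
# Sketch: add a random factor by jittering a contour plate before generation,
# so repeated runs give slightly different "prints". Parameters are assumptions.
import numpy as np
from PIL import Image

def jitter_plate(plate_path: str, seed: int) -> Image.Image:
    rng = np.random.default_rng(seed)
    plate = np.array(Image.open(plate_path).convert("L")).astype(np.float32)
    # Mild per-pixel noise plus a small random brightness shift of the lines.
    noise = rng.normal(loc=0.0, scale=8.0, size=plate.shape)
    shifted = np.clip(plate + noise + rng.uniform(-10, 10), 0, 255)
    return Image.fromarray(shifted.astype(np.uint8))

# Each seed yields a slightly different copy of the same plate.
for seed in range(3):
    jitter_plate("collage_plate.png", seed).save(f"plate_variant_{seed}.png")
```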
Variations in the process can also produce different styles, some quite painting-like, without any explicit style model. So far, however, this process is difficult to control and requires much experimentation when looking for a specific visual effect.
I am working further on this, trying different solutions and training materials. My latest experiment used image pairs like this, with a contour print plate and a processed copy of a photo.
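For readers wondering what such training pairs look like in practice, pix2pix-style training scripts commonly expect the two images concatenated side by side. Below is a sketch of assembling pairs in that layout; the directory names and the 512×512 size are placeholder assumptions, not the settings of my experiments.

```python
# Sketch: assemble side-by-side training pairs (contour plate | processed photo),
# the layout that pix2pix-style training scripts commonly expect.
# Directory names and the 512x512 size are placeholder assumptions.
from pathlib import Path
from PIL import Image

SIZE = (512, 512)
Path("pairs").mkdir(exist_ok=True)

for plate_path in Path("plates").glob("*.png"):
    photo_path = Path("processed") / plate_path.name
    plate = Image.open(plate_path).convert("RGB").resize(SIZE)
    photo = Image.open(photo_path).convert("RGB").resize(SIZE)
    pair = Image.new("RGB", (SIZE[0] * 2, SIZE[1]))
    pair.paste(plate, (0, 0))
    pair.paste(photo, (SIZE[0], 0))
    pair.save(Path("pairs") / plate_path.name)
```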
As is typical for these experiments, this approach gave good results with only a small part of the photo material. For me, that is not a problem: I am not trying to develop a machine that will make anything look great, but rather, step by step, to get a grip on the process so that I can create pictures that express my own creative needs. The following examples from the latest run are not yet my own works, though, just materials along the way. Anyhow, I have given them names so as to assign some meaning to each of them.
“The Rift”
“Waiting”
“Writing on the wall”
“Who are we listening to?”
“What happens out there?”
“On the beach”
“Relaxing after work”