The core of neural-style is the pre-trained neural network. Pre-trained means that the network has been trained on a large number of images, usually with the goal that it can recognize objects and features in images. The default network for neural-style, VGG-19, is very effective for such purposes. In addition, when used with neural-style, it makes it relatively easy to produce high-quality images. The only drawback is that it consumes a lot of memory: with 8 GB of memory one cannot get resolutions beyond 800 pixels.
It is now possible to switch neural-style to another network. I have tried nin-imagenet-conv, and found that I can get 1280px without running over 4 GB. The drawback is that with the default settings, the results might not be what you would expect if you are used to VGG-19. I almost gave up myself, but then I suddenly started making progress with nin-imagenet-conv. It is different, but not inferior. It may not be the best choice for copying someone’s style, but it is a great creative tool when you are looking for a style of your own.
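For reference, in neural-style the network is chosen with the -model_file and -proto_file options. A sketch of the kind of command I mean, assuming a standard installation; the file paths are examples, and the layer names are the ones commonly suggested for the NIN model rather than anything specific to my runs:

```shell
# Run neural-style with nin-imagenet-conv instead of the default VGG-19.
# Paths and image names are placeholders; adjust to your setup.
th neural_style.lua \
  -content_image helsinki.jpg \
  -style_image feininger.jpg \
  -output_image out.png \
  -image_size 1280 \
  -model_file models/nin_imagenet_conv.caffemodel \
  -proto_file models/train_val.prototxt \
  -content_layers relu0,relu3,relu7,relu12 \
  -style_layers relu0,relu3,relu7,relu12
```

Because NIN's layers are named differently from VGG-19's, the -content_layers and -style_layers options must be given explicitly; the defaults refer to VGG layer names and will not match.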
In this image, I have used neural-style together with nin-imagenet-conv to convert a webcam view from Helsinki using style from a painting by Lyonel Feininger.
As always, I am not merely interested in the final results of neural-style after several hundred iterations. I like to look at the intermediate images, to see the beauty in them, and also to use them in animations, as in the following video clip, where the Helsinki view gradually emerges from the abstractness of the initial image.
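neural-style writes these intermediate images itself: the -save_iter option controls how often a numbered snapshot is saved, so the frames for an animation like this fall out of a single run. A minimal sketch, with placeholder file names:

```shell
# Save a snapshot every 10 iterations; the run then produces
# out_10.png, out_20.png, ... which can be assembled into a video.
th neural_style.lua \
  -content_image helsinki.jpg \
  -style_image feininger.jpg \
  -output_image out.png \
  -save_iter 10 \
  -num_iterations 1000
```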
Next, I’ll give an example of how I use neural-style together with other tools to develop original images. Let us take a photo from Finland late last autumn.
For the style image, I select an image I have created myself, using a Processing script and a webcam together with some acting in front of the camera.
Applying this style to the photo, with suitable parameters, gives me this:
Now, I might like it, the stripes in particular, but I want to try adding some color. I could use the original photo as a style image, but this time I’ll use another derivative of the same photo to add color. I had created this image earlier using a Processing script, a webcam, some photos and neural-style.
The result, combining the grey stripes and the stylized autumn photo, currently looks like this. If it is not what I was looking for, I can continue in the same way. At this point, I have used several photos, Processing scripts and neural-style runs. At each phase, I can choose the style image, and in addition I have control over many parameters.
Furthermore, I have the freedom to watch neural-style producing intermediate results, so if I happen to like a rougher version from an early iteration, I can use it instead of the more polished result after 1000 to 2000 iterations. Here, for example, I have used the same content and style images, with different weight parameters and only 50 iterations.
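That rough early-iteration look can also be produced deliberately, by cutting -num_iterations short and shifting the content/style weight balance. A sketch under those assumptions; the file names and the weight values here are only illustrative, not the ones from my run (5e0 and 1e2 are neural-style's defaults for the two weights):

```shell
# Stop after only 50 iterations and push the balance toward the style.
th neural_style.lua \
  -content_image autumn.jpg \
  -style_image stripes.png \
  -output_image rough.png \
  -content_weight 5e0 \
  -style_weight 1e3 \
  -num_iterations 50
```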