Many people have trouble getting neural-style to run, often because they try to use it on machines with limited memory, such as 2 GB. My own tests indicate that neural-style with the default VGG19 network requires at least 3 GB of memory. Lowering image_size does not help, because there is a memory peak during initialization that appears to be unrelated to image_size. My tests were conducted on CPU, so results may vary on a GPU, but in practice running neural-style with VGG19 in only 2 GB of memory does not seem to work.
There is an alternative to VGG19, nin-imagenet-conv, which runs quite comfortably even in under 2 GB of memory. You can download the files from https://drive.google.com/folderview?id=0B0IedYUunOQINEFtUi1QNWVhVVU&usp=drive_web . Download nin_imagenet_conv.caffemodel and train_val.prototxt into your models/ folder.
You can then run neural-style as follows (adjust the -gpu parameter if you want to run on a GPU):
th neural_style.lua -gpu -1 -print_iter 1 -save_iter 50 -num_iterations 2000 -image_size 512 -output_image nintest.png -model_file models/nin_imagenet_conv.caffemodel -proto_file models/train_val.prototxt -content_layers relu7 -style_layers relu1,relu3,relu5,relu7,relu9
On my CPU, this uses only around 0.8 GB of reserved memory and produces the following image after 1000 iterations.
Increasing image_size to 960 still runs in under 3 GB of memory and produces the following picture after 1000 iterations.
The content and style weights can be changed as usual. Likewise, one can experiment with the layers: relu1, relu2, relu3, relu5 (there is no relu4), relu6, relu7, relu8, relu9, relu10, relu11, relu12. Running the first example again with image_size 512 but content layer relu3, we get the following image after 1200 iterations, using only slightly more than 0.8 GB of memory.
Many different results are possible by tweaking the parameters. The results will not be identical to those achieved with VGG19, but they should still be interesting and useful.
For more examples using nin-imagenet-conv, see my earlier post, Switching to a smaller net.