Responsive Portraits in Real-time

As we have seen, neural networks can create great pictures, but the process is still too slow for real-time use or for responsive applications in which the user can affect the resulting image while it is being generated.

The Processing framework and programming language shines in such applications. Here I will present a responsive portrait generator, implemented in Processing.

This Processing “sketch”, as programs are called in Processing, uses a webcam to continuously capture the scene to be painted. At the same time, it paints a picture in random strokes that take their color from the webcam view. In addition, the sketch recognizes whether there is a face in the view, and paints the face in finer strokes.
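The core sizing rule can be sketched outside Processing as well: the closer the brush position is to the detected face center, the finer the stroke. Here is a minimal plain-Java illustration of that rule, with the divisor 30 and the clamping bounds taken from the sketch below (the class and method names are mine, for illustration only):

```java
// Stroke sizing as used by the portrait sketch: stroke width grows
// with the distance from the brush position (x, y) to the face
// center (cx, cy), so the face is painted in finer strokes.
public class StrokeSizing {

    // Clamp v into [lo, hi], like Processing's constrain()
    static float constrain(float v, float lo, float hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Stroke width in pixels: between 2 (at the face) and 32 (far away)
    static float strokeWidth(float x, float y, float cx, float cy) {
        float d = (float) Math.hypot(x - cx, y - cy);
        return constrain(d / 30, 2, 32);
    }

    public static void main(String[] args) {
        // Right at the face center: minimum width, i.e. finest strokes
        System.out.println(strokeWidth(320, 240, 320, 240)); // prints 2.0
        // In a far corner of a 640x480 frame the strokes are much coarser
        System.out.println(strokeWidth(0, 0, 320, 240));
    }
}
```

The same rule, with slightly different bounds, also limits the step length of the random walk, so the brush lingers in small steps near the face and sweeps across the background in large ones.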


The output image is continuously updated on the screen. It takes some time for the sketch to paint a complete picture, so if the subject or the camera moves, it again takes a while before the picture catches up. In the meantime the sketch generates images as if of a broken world, until finally, if the subject and the camera hold still, a harmonious image is reached.


Here we can see the portrait generator in action: the sketch is running, and people step into the webcam view to have their portrait painted. This is what happens.

Video-Link: https://www.youtube.com/watch?v=JoW1WS575dQ

And here we can see some images painted by the portrait generator, broken and whole.

Video-Link: https://www.youtube.com/watch?v=_fGUd6bWIr8

Finally, here is the source code of the sketch:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PImage img;  // note: created in setup() but not otherwise used
float x;
float y;
float cx;
float cy;

void setup() {
  size(640, 480);
  img = new PImage(width,height) ;
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  // Start x and y in the center
  x = width/2;
  y = height/2;
  cx = x ;
  cy = y ;
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
    opencv.loadImage(video);
  }

  video.loadPixels();

  // Detect faces; if one is found, track the face center,
  // mirrored horizontally to match the painted image
  Rectangle[] faces = opencv.detect();

  if (faces.length > 0) {
    cx = faces[0].x + faces[0].width/2;
    cy = faces[0].y + faces[0].height/2;
    cx = width - cx;
  }
  
  for (int k=0; k<2080; k++) {
    // Strokes get finer (smaller width and shorter steps)
    // the closer the brush is to the face center
    float d = dist(x, y, cx, cy);
    float w = constrain(d/30, 2, 32);
    float maxlen = constrain(d/30, 4, 32);
    
    // Pick a new x and y
    float newx = constrain(x + random(-maxlen,maxlen),0,width-1);
    float newy = constrain(y + random(-maxlen,maxlen),0,height-1);
  
    // Find the midpoint of the line
    int midx = int((newx + x) / 2);
    int midy = int((newy + y) / 2);
  
    // Pick the color from the video, reversing x
    color c = video.pixels[(width-1-midx) + midy*video.width];   
    stroke(c);
    strokeWeight(w);
    line(x,y,newx,newy);
  
    // Save newx, newy in x,y
    x = newx;
    y = newy; 
  }
}

void keyPressed() {
   saveFrame("########.jpg") ; 
}
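One detail worth a closer look is the color lookup `video.pixels[(width-1-midx) + midy*video.width]`. Processing stores pixels row by row, so pixel (x, y) lives at index x + y*width; sampling at width-1-x instead of x flips the frame left-to-right, which makes the painting behave like a mirror for the person in front of the camera. A small plain-Java illustration of that indexing (class and method names are mine):

```java
// Horizontal mirroring via pixel indexing, as in the sketch's color
// lookup: pixels are stored row-major, so (x, y) maps to x + y*width.
// Sampling at (width-1-x, y) flips the image left-to-right.
public class MirrorIndex {

    static int mirroredIndex(int x, int y, int width) {
        return (width - 1 - x) + y * width;
    }

    public static void main(String[] args) {
        int width = 640;
        // The leftmost pixel of a row samples from the rightmost pixel
        System.out.println(mirroredIndex(0, 0, width)); // prints 639
        // The row offset y*width is unaffected by the flip
        System.out.println(mirroredIndex(0, 1, width)); // prints 1279
    }
}
```

The same mirroring is applied to the face center (`cx = width - cx`), so the detected face and the sampled colors line up in the flipped output.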
