Monday, 28 February 2011

The Computer Drawing Experiment...

Since implementing the 'Scribbler' for my earlier 'spiders' demo video I've started to think more and more about how I draw.  I wanted to see if I could write an algorithm which would draw as efficiently as humans do.  First I adapted the scribbler to draw with only one long line and combined this with my optical flow distortion code, again using OpenCV for optical flow analysis and Blender as my drawing interface.  The scribbler would need to know where it had been recently so that it didn't draw over those parts again and instead headed off to complete the rest of the image.  Given a selection of random nearby points to scribble over next, it would need to decide which one was best based on how dark the point needed to be, how close it was, and whether (and how many times) it had already been there.
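In rough Python the decision step looks something like this. It's only an illustrative sketch, not the actual code from my Blender script, and the weights and helper names (target_darkness, visit_counts) are made up for the example:

    import random

    def score_candidate(candidate, pen_pos, target_darkness, visit_counts):
        # Darker targets score higher, far-away points score lower, and points
        # we've already scribbled over get penalised for each previous visit.
        dx = candidate[0] - pen_pos[0]
        dy = candidate[1] - pen_pos[1]
        distance = (dx * dx + dy * dy) ** 0.5
        visits = visit_counts.get(candidate, 0)
        return target_darkness(candidate) - 0.5 * distance - 2.0 * visits

    def pick_next_point(pen_pos, target_darkness, visit_counts, spread=20, n=12):
        # Sample a handful of random nearby points and keep the best-scoring one.
        candidates = [(pen_pos[0] + random.randint(-spread, spread),
                       pen_pos[1] + random.randint(-spread, spread))
                      for _ in range(n)]
        return max(candidates, key=lambda c: score_candidate(
            c, pen_pos, target_darkness, visit_counts))

Most of the behaviour comes down to how those three terms are weighted against each other.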

Anyway this was the first result... a bit busy, not very accurate, and not much economy of line.  Very clearly not drawn by a human, or at least not by a sober one.



Next I decided to pre-process the images in Photoshop with an edge-detect filter.  I was going to write my own fast difference of Gaussians (or, even simpler, a difference between neighbours) but I wanted to check the principle worked first.
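For reference, the unoptimised textbook difference of Gaussians is only a few lines with OpenCV and NumPy; this sketch is roughly what I'd be swapping in for the Photoshop step (the file names are just placeholders):

    import cv2
    import numpy as np

    def difference_of_gaussians(grey, sigma_small=1.0, sigma_large=2.0):
        # Blur at two scales and subtract: edges survive, flat areas cancel out.
        small = cv2.GaussianBlur(grey.astype(np.float32), (0, 0), sigma_small)
        large = cv2.GaussianBlur(grey.astype(np.float32), (0, 0), sigma_large)
        return small - large

    img = cv2.imread("frame.png", 0)  # 0 = load as greyscale
    edges = difference_of_gaussians(img)
    # Rescale around mid-grey just so the result is viewable as an image.
    cv2.imwrite("edges.png", np.clip(edges * 4 + 128, 0, 255).astype(np.uint8))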



Straight away much more economical with the lines, and to some extent more accurate.  Since edges or lines essentially have direction and fills don't (i.e. you can shade a fill by moving the pen in any direction you want until you reach a border, but you can only run a pen along a line in one of two directions) I could use some code to make the line prefer not to change direction and keep heading straight ahead.  This saves some of the over-and-back wandering of the free scribbling algorithm but isn't perfect.  My next step is to get the scribbler to intelligently work out which way the edge (or line) it is currently drawing is curving, and to anticipate where to look next.  I think that when we draw we tend to look ahead a fair bit and queue up drawing instructions, so our eyes are always a few centimetres ahead of our pen.  If our eyes get lost and have to search then our pen pauses for a moment.
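The direction preference itself is just one extra term in the candidate score: reward points that keep the pen heading the way it is already going, punish ones that double back. Again only a sketch, with a made-up weight:

    import math

    def direction_bonus(prev_pos, pen_pos, candidate, weight=1.5):
        # Compare the pen's current heading with the direction to the candidate.
        hx, hy = pen_pos[0] - prev_pos[0], pen_pos[1] - prev_pos[1]
        cx, cy = candidate[0] - pen_pos[0], candidate[1] - pen_pos[1]
        h_len = math.hypot(hx, hy)
        c_len = math.hypot(cx, cy)
        if h_len == 0 or c_len == 0:
            return 0.0
        # Cosine of the turning angle: 1 = straight ahead, -1 = doubling back.
        cos_turn = (hx * cx + hy * cy) / (h_len * c_len)
        return weight * cos_turn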

Next I compared the two computer drawing algorithms with my own human 'blind' rotoscope.  The video is here if you want to see it.

Despite its improvements the algorithm still seems to miss important features in the image and doesn't prioritise what to draw.  I think that rather than looking for the darkest areas or the sharpest edges it probably needs to find a few high-contrast points in the image and link them via as many different edges as are available.  I then built a homemade gaze tracker with my bike helmet, a webcam, AfterEffects, an OpenOffice Spreadsheet, Python and Blender.




The calibration here isn't perfect but this next video demonstrates an important fact: the eye makes big jumps across the image and doesn't smoothly follow any lines, unless they are moving.  It also doesn't scan around anywhere near as much as I was expecting.  It avoids the edges of the image - I assume because these can be seen in the periphery anyway and the film-maker probably hasn't put anything of interest there.  I'm also quite amazed by how much the moving street scene causes the gaze to get pushed out of the right edge of the screen - the scribbler also suffered from that quite a bit on the moving images.  When the gaze goes out of the film window that's usually a blink, not a calibration problem.



Well that's by some stretch the longest post yet and almost makes up for a quiet month on the blog.  Now for some maths...


Wednesday, 16 February 2011

Optical Flow and Rotoscopy

Been feeling pretty ill the last few days. Managed to drag myself to the Broadcast Video Expo yesterday at Earls Court. Would love to go again tomorrow to watch some of Arri's lighting seminars and the DaVinci Resolve colour grading workshops, but I probably won't be feeling well enough yet, so it'll be another day at home storyboarding and tidying up odds and ends.



Last week I rotoscoped over the intro to 'Down by Law' with one long continuous virtual line using Blender. At college Felipe suggested I could do a bit more with the data and maybe find some way of using the video to distort the drawings. I wrote a Python script using the OpenCV bindings for Python 2.7 to analyse the optical flow of the original film footage. I based the script on the lkdemo sample, using CalcOpticalFlowPyrLK. The script dumped out a load of text files containing the feature tracking info.
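Stripped down, the tracking loop is roughly the following. I've written it here against the newer cv2 API rather than the old-style cv bindings the lkdemo uses, and the file names are made up, so treat it as a sketch rather than the script I actually ran:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("down_by_law_intro.avi")
    ok, frame = cap.read()
    prev_grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pick a few hundred strong corners to follow through the footage.
    points = cv2.goodFeaturesToTrack(prev_grey, maxCorners=400,
                                     qualityLevel=0.01, minDistance=8)

    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade: follow each feature from the previous frame.
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_grey, grey, points, None)
        good = new_points[status.flatten() == 1]
        # Dump one text file per frame for the Blender script to read back in.
        np.savetxt("track_%04d.txt" % frame_no, good.reshape(-1, 2))
        points = good.reshape(-1, 1, 2)
        prev_grey = grey
        frame_no += 1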

In Blender I wrote a second script which warped my 'drawing' (made up of vertices and edges) on a frame-by-frame basis, using the nearest tracked feature point to drag the line around. Effectively the script puts the line's control points (vertices) into Voronoi cells which are shifted by the tracked features (like a virtual earthquake). I added in very simple outlier detection, checking consistency with the flow of neighbouring points along the rotoscoped line. I had wanted to convert the point cloud of feature track points to a mesh using Delaunay triangulation, and then use the animated mesh to deform the string of vertices, but it would probably have been excessively slow in Python!
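The per-frame warp itself boils down to: for each vertex of the drawn line, find the nearest tracked feature and shift the vertex by that feature's displacement between frames. With the Blender bookkeeping stripped out it's roughly this (the names and the max_reach cut-off are illustrative):

    import numpy as np

    def warp_line(line_verts, features_prev, features_curr, max_reach=100.0):
        # line_verts: (N, 2) array of the drawing's control points.
        # features_prev / features_curr: (M, 2) tracked feature positions.
        warped = []
        for v in line_verts:
            d2 = np.sum((features_prev - v) ** 2, axis=1)
            i = int(np.argmin(d2))
            if d2[i] > max_reach ** 2:
                warped.append(v)  # nothing tracked nearby: leave the vertex alone
            else:
                warped.append(v + (features_curr[i] - features_prev[i]))
        return np.array(warped)

Because each vertex always follows its single nearest feature, the vertices effectively live in the Voronoi cells of the tracked points, which is where the earthquake analogy comes from.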

Anyways thanks to Felipe for pushing me that bit further. Meanwhile progress on the scribbler continues - I've added in three more drawing styles (two of them shown below) and a depth of field control. Next step is to work on edge detection (I've got down on paperware how it should work) and adding a bit of stability to make sure no essential parts of an image get left undrawn on any one frame.

Thursday, 10 February 2011

Rising Unrest at CSM

According to the National Student Survey, levels of satisfaction at CSM have been falling year on year. The move to Kings Cross seems to have upset most of the students I have spoken to about it. One of the changes which will severely affect future students at the college is a reorganisation of studio technicians. Currently every course has one or more technicians who know the course well, generally have several years' experience of working with tutors and students on the course, and have time to teach as well as 'assist'.

The move to Kings Cross will see many of these positions removed, and the level of assistance vastly reduced: technicians will be spread far more thinly and will not have time to do any teaching. Furthermore the well-established relationships between students, tutors and technicians will be destroyed by the reorganisation. The reasons for this are obvious: the college is making drastic cost-cutting measures across the board with little or no thought as to how they will impact on our learning experience.

A few of us on the PgDip course here at CSM decided to take action and wrote the below letter to the dean:

Tuesday, 8 February 2011

Rotoscope Experiment

This morning I was reading a pidgin-English translation of Yuri Norstein's book "Snow on the Grass".  He discusses eye tracking experiments on viewers looking at paintings and sculptures and shows maps of where their eyes fall. I wondered how my eyes would interpret one of my favourite pieces of film - the introduction to Jim Jarmusch's Down by Law. The sequence of shots is beautifully composed but always moving. I could record my eye position by drawing the area of the image under my eye. By slowing the footage down by a factor of 50 I was able to get an almost flowing drawn view of the street. My own drawings were invisible to me as I drew - I used Python scripts and Blender to record my pen's position over the footage and then render the trace back into one long line.
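The 'render back the trace' half is the straightforward part: turn the recorded pen positions into a single chain of vertices and edges. A minimal sketch of that bit (the object-linking call varies between Blender versions):

    import bpy

    def build_trace(points, name="trace"):
        # One vertex per recorded pen position, each joined to the next by an edge.
        verts = [(x, y, 0.0) for x, y in points]
        edges = [(i, i + 1) for i in range(len(verts) - 1)]
        mesh = bpy.data.meshes.new(name)
        mesh.from_pydata(verts, edges, [])
        obj = bpy.data.objects.new(name, mesh)
        # On older Blenders this is bpy.context.scene.objects.link(obj).
        bpy.context.scene.collection.objects.link(obj)
        return obj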



I've since worked out how I can combine optical flow data with this one long virtual line, fingers crossed, to create an automatically animated drawing... some more Python coding on the agenda for this week!

Here's a video from the start of term I never got around to posting.  I used waylow's great pedro rig.

Thursday, 3 February 2011

Works In Progress: Ice Cream Van and Facial Expressions

Here's an ice cream van I've been modelling and hand-texturing for next week's lip-sync exercise, which will involve Big Buck Bunny and Sintel in an unsavoury situation... finally a chance to air my views on Buck's character design!  I'll probably be rendering shadeless with some custom 'shaders' (or more accurately post/comp) coded in Python, if I manage to implement my algorithms in time (or if my paperware even works).  I also messed around animating straight-ahead drawn and sculpted (in Blender) faces popping from expression to expression.