For the July 2010 issue, something rather cool: face detection. How does a program (such as the one in your point-and-shoot camera) work out from an image whether there is a face in it, and if so where it is?
The most widely used algorithm is the Viola-Jones classifier. Basically the way it works is not to try to identify a face in the image directly, but instead to analyze the image for certain rectangular “features” – no, not facial features, something altogether more basic than that – and then classify those features. But what features should it look for? In essence, the researchers analyzed thousands of faces culled from images on the internet to identify certain large-scale characteristics that appeared in all or most of those facial images. They then designed and trained (using a machine-learning technique called boosting, not a neural network as I originally wrote) a cascading classifier: if a region of the image had features A and B then it might be a face, otherwise it certainly wasn’t. Regions that passed went on to be checked for features C, D, E, and so on. If a region failed at any stage, it was rejected as not-a-face immediately; only regions that survived every stage were declared faces. And so on, so forth.
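To make those rectangular features concrete, here's a minimal sketch in Python. The standard trick (from the Viola-Jones paper) is to precompute an "integral image" – a running sum table – so that the sum of pixels in any rectangle can be read off in constant time, which is what makes evaluating thousands of features per region fast. The function names here (`integral_image`, `rect_sum`, `two_rect_feature`) are my own illustrative choices, not from any library, and the tiny hand-built image is just for demonstration.

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is
    (x, y), computed in O(1) from four corner lookups."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def two_rect_feature(ii, x, y, w, h):
    """A two-rectangle feature: sum of the left half minus sum of the
    right half. A strongly negative or positive value indicates a
    dark/bright contrast of the kind the classifier is trained on."""
    half = w // 2
    left = rect_sum(ii, x, y, half, h)
    right = rect_sum(ii, x + half, y, half, h)
    return left - right

# A 2x4 grayscale "image": dark on the left, bright on the right.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255]]
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 2))  # strongly negative: left darker than right
```

A real detector slides a window over the image and evaluates many such features (two-, three-, and four-rectangle variants) at each position and scale, feeding the values into the cascade described above.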
The eventual classifier was fast enough and small enough to fit in a camera’s ROM, leading to those clever cameras that can spot a face in a scene and focus on it. Now they can even tell whether the detected face is smiling… Damned clever, what?
This article first appeared in issue 296, July 2010. It was also published online on the TechRadar site.
You can read the PDF here.
(I write a monthly column for PCPlus, a computer news-views-n-reviews magazine in the UK (actually there are 13 issues a year — there's an Xmas issue as well — so it's a bit more than monthly). The column is called Theory Workshop and appears in the Make It section of the magazine. When I signed up, my editor and the magazine were gracious enough to allow me to reprint the articles here after say a year or so.)