Visual Gesture Recognition by Andy Abgottspon – The Foundry
Andy begins by classifying various types of visual gesture recognition, citing the Xbox Kinect and other tools as evidence that the field is becoming increasingly popular.
He demos a simple app which tracks the user’s motion after he has configured it to track the position of his face and a couple of colourful gloves.
His implementation is based on openFrameworks and OpenCV; the latter has been ported to Objective-C for use on the iPad 2.
A simple algorithm is preferred because performance on the portable device is limited. The problem with two-hand recognition is that the recognised contours overlap. To deal with this, we need to consider the previous position of each contour. The size of the identified contour can be used to recognise grab gestures.
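The two ideas above – resolving overlapping hands via previous positions, and detecting grabs via contour size – can be sketched roughly like this. This is a minimal pure-Python illustration, not Andy's actual openFrameworks/OpenCV code; the contour representation as `(cx, cy, area)` tuples and the grab threshold are my assumptions:

```python
import math

def match_contours(prev, curr):
    """Assign each current contour to the nearest previous centroid.

    prev, curr: lists of (cx, cy, area) tuples. Returns (prev_index,
    curr_index) pairs, greedily matched by distance, so two hands that
    drift close together keep their identities between frames.
    """
    pairs = []
    unused = set(range(len(prev)))
    for ci, (cx, cy, _) in enumerate(curr):
        if not unused:
            break
        pi = min(unused,
                 key=lambda i: math.hypot(prev[i][0] - cx, prev[i][1] - cy))
        unused.remove(pi)
        pairs.append((pi, ci))
    return pairs

def is_grab(prev_area, curr_area, ratio=0.6):
    """A closing fist shrinks the tracked contour; flag a grab when the
    area drops below `ratio` of the previous frame's area (the ratio is
    an invented threshold, not a figure from the talk)."""
    return curr_area < ratio * prev_area

# Two hands in the previous frame...
prev = [(100, 120, 2000), (300, 110, 2100)]
# ...which have moved closer together; the first has closed into a fist.
curr = [(150, 115, 1000), (260, 112, 2050)]

matches = match_contours(prev, curr)          # → [(0, 0), (1, 1)]
grabs = [ci for pi, ci in matches if is_grab(prev[pi][2], curr[ci][2])]
```

In a real pipeline the `(cx, cy, area)` tuples would come from something like OpenCV's contour extraction each frame; the matching step is what keeps the two hands distinct when their contours merge and split again.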
Head tracking is supported by OpenCV.
Using colours to track hands is great except in situations where the colour can change. Even a grab action can cast a shadow which effectively changes the colour – potentially breaking the tracking.
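Why a shadow breaks colour tracking is easy to see with a simple HSV range check, the same kind of per-pixel threshold test OpenCV's `inRange` performs. A toy sketch; the glove colour and threshold values are invented for illustration:

```python
def in_glove_range(hsv, lower=(100, 120, 120), upper=(130, 255, 255)):
    """Return True if an HSV pixel falls inside the (hypothetical)
    colour range configured for a blue glove."""
    return all(lo <= v <= hi for v, lo, hi in zip(hsv, lower, upper))

glove_lit = (115, 200, 200)     # glove in direct light: inside the range
glove_shadow = (115, 200, 80)   # same hue, but the value channel drops
```

The hue stays stable under a shadow while the value (brightness) plummets, so the shadowed pixel fails the test and the glove "disappears" – which is why colour trackers usually work in HSV with a deliberately generous value range.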
Andy (with an assistant) demos a 3D game. This seems to work reasonably well.
He then goes on to describe possible applications. One interesting area is medicine, where touch screens cannot be used for hygiene reasons.
Given what I’ve seen, I’m not actually sure there are that many applications of this technology (in its current state) beyond gaming, which may in itself be a gimmick and not much more. On mobile devices in particular, the “touch” paradigm seems to be sufficient – and in practice far more performant – for most situations.
However, I’m not famous for my imagination, and I’ll probably be proven wrong sooner rather than later.
Still – a great talk, because I feel it has given the audience an opportunity to sense what the future may hold.