Saturday, May 3, 2008

COMPUTER VISION-BASED GESTURE RECOGNITION FOR AN AUGMENTED REALITY INTERFACE


Störring et al. present an augmented reality interface driven by hand gestures captured with a head-mounted camera. First they must segment the hand from the background. The images are projected into a chromaticity space so that color can be separated from intensity and other image features. The hands and the physical placeholder objects (PHOs) are modeled as chromaticity distributions represented as Gaussians, and each pixel is classified as hand, background, or PHO. Candidate regions must fall within a specific size range, and missing pixels are filled in with a morphological opening filter. Finally the hand is plotted radially from its center, and the number of protrusions beyond a certain radius is counted to determine the gesture.
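The pipeline can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the Gaussian parameters, the likelihood threshold, the SciPy-based morphology, and the protrusion-counting details are all my own assumptions standing in for the paper's tuned models.

```python
import numpy as np
from scipy import ndimage

def to_chromaticity(rgb):
    """Project RGB to (r, g) chromaticity, separating color from intensity."""
    s = rgb.sum(axis=-1, keepdims=True).astype(float)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    return rgb[..., :2] / s

def gaussian_loglik(rg, mean, cov):
    """Log-likelihood of each pixel's chromaticity under a 2D Gaussian."""
    d = rg - mean
    inv = np.linalg.inv(cov)
    m = np.einsum('...i,ij,...j->...', d, inv, d)  # Mahalanobis distance
    return -0.5 * m - 0.5 * np.log(np.linalg.det(cov)) - np.log(2 * np.pi)

def segment_hand(rgb, hand_mean, hand_cov, thresh=-8.0):
    """Label pixels as hand where the likelihood clears a threshold, then
    clean the mask morphologically (the closing to fill holes is my choice)."""
    ll = gaussian_loglik(to_chromaticity(rgb), hand_mean, hand_cov)
    mask = ll > thresh
    mask = ndimage.binary_opening(mask)
    mask = ndimage.binary_closing(mask)
    return mask

def count_protrusions(mask, radius_frac=0.6, n_angles=360):
    """Plot the hand radially from its centroid and count contiguous runs of
    angles whose boundary radius exceeds radius_frac * max radius (fingers)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)
    bins = ((theta + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    rmax = np.zeros(n_angles)
    np.maximum.at(rmax, bins, r)          # boundary radius per angular bin
    beyond = rmax > radius_frac * rmax.max()
    # rising edges around the circle = number of distinct protrusions
    return int(np.sum(beyond & ~np.roll(beyond, 1)))
```

Counting only runs of angles past the radius threshold is exactly why the method knows how many fingers are extended but not which ones, which is the point raised in the discussion below.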

Discussion
Projecting the hand onto radial coordinates to determine the number of outstretched fingers is rather novel, but the claimed "robustness" to how a gesture is formed simply recasts a drawback as an advantage: the system can't tell which fingers are outstretched, only how many, so the authors present that as a desired quality. They also offer only the generic observation that "users adapted quickly" as evidence that the system works well.

Reference
Moritz Störring, Thomas B. Moeslund, Yong Liu, and Erik Granum. "Computer Vision-Based Gesture Recognition for an Augmented Reality Interface." In Proceedings of the 4th IASTED International Conference on Visualization, Imaging, and Image Processing, pages 766-771, Marbella, Spain, September 2004.
