Alvise Memo, Pietro Zanuttigh
Multimedia Tools and Applications, 2017


This paper proposes a novel human-computer interaction system based on gesture recognition. It combines a head-mounted display with a multi-modal sensor setup that also includes a depth camera. The depth information is used both to seamlessly embed augmented reality elements into the real world and as input for a novel gesture-based interface. Reliable gesture recognition is obtained through a real-time algorithm that exploits novel feature descriptors, arranged in a multi-dimensional structure and fed to an SVM classifier. The system has been tested with various augmented reality applications, including an innovative human-computer interaction scheme in which virtual windows can be arranged within the real world observed by the user.
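To make the descriptor-plus-classifier pipeline concrete, here is a minimal, self-contained sketch. The feature (angle-binned distances from the hand centroid to the silhouette contour) and the nearest-centroid classifier are illustrative simplifications chosen for brevity; they are not the actual descriptors or the SVM used in the paper.

```python
import math

def silhouette_descriptor(contour, bins=16):
    """Toy silhouette descriptor (a hypothetical stand-in for the
    paper's features): the maximum distance from the hand centroid
    to the contour, binned by angle around the centroid and
    normalized by the peak value for scale invariance."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    feat = [0.0] * bins
    for x, y in contour:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = min(int(ang / (2 * math.pi) * bins), bins - 1)
        feat[b] = max(feat[b], math.hypot(x - cx, y - cy))
    peak = max(feat) or 1.0
    return [v / peak for v in feat]

def classify(training_set, descriptor):
    """Nearest-centroid classifier: a minimal stand-in for the SVM
    used in the actual system. training_set maps a gesture label to
    a list of descriptors of that gesture."""
    def dist2(c, q):
        return sum((a - b) ** 2 for a, b in zip(c, q))
    best_label, best_d = None, float("inf")
    for label, descs in training_set.items():
        centroid = [sum(col) / len(col) for col in zip(*descs)]
        d = dist2(centroid, descriptor)
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```

In the real system the contour would come from segmenting the hand in the depth map, and the descriptors of many (partly synthetic) training samples would train an SVM rather than simple class centroids.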


The full paper can be downloaded here.

Sample Video of the System



The video shows an example of the proposed system in use.


Experimental Results Dataset

The experimental results dataset is shared with our conference publication presenting the gesture recognition subsystem [1]. If you use the dataset, please cite [1] and this work [2]. The dataset can be downloaded from . For any information on the data, you can contact .



[1] A. Memo, L. Minto, P. Zanuttigh, "Exploiting Silhouette Descriptors and Synthetic Data for Hand Gesture Recognition", STAG: Smart Tools & Apps for Graphics, 2015

[2] A. Memo, P. Zanuttigh, "Head-mounted gesture controlled interface for human-computer interaction", accepted for publication in Multimedia Tools and Applications



XHTML/CSS website layout by Ben Goldman