FEETICHE: FEET Input for Contactless Hand gEsture Interaction
Document type:
Conference paper with published proceedings
DOI:
Title:
FEETICHE: FEET Input for Contactless Hand gEsture Interaction
Author(s):
Lopes, Daniel Simões [Author]
Relvas, Filipe [Author]
Paulo, Soraia Figueiredo [Author]
Rekik, Yosra [Author]
Laboratoire d'Automatique, de Mécanique et d'Informatique industrielles et Humaines - UMR 8201 [LAMIH]
Grisoni, Laurent [Author]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Jorge, Joaquim [Author]
Conference:
17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2019
City:
Brisbane QLD
Country:
Australia
Conference start date:
2019-11-14
Publisher:
ACM
Keyword(s) (in English):
3D manipulation
Foot interaction
Hand gestures
Heel rotation
Large screens
Selection
Tapping
HAL discipline(s):
Computer Science [cs]
Abstract (in English):
Foot input has been proposed to support hand gestures in many interactive contexts; however, little attention has been given to contactless 3D object manipulation. This is important because many settings, notably sterile surgical theaters, require contactless operation. Relying solely on hand gestures makes it difficult to specify precise interactions, since hand movements are hard to segment into command and interaction modes. The unfortunate results range from unintended activations to noisy interactions and misrecognized commands. In this paper, we present FEETICHE, a novel set of multi-modal interactions combining hand and foot input to support contactless 3D manipulation tasks while standing in front of large displays, with mode switching driven by foot tapping and heel rotation. We use depth-sensing cameras to capture both hand and foot gestures, and develop a simple yet robust motion capture method to track dominant foot input. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching, and reveal improved accuracy on both rotation and translation tasks.
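The abstract does not detail how foot gestures are recognized or mapped to modes, so the following is a minimal sketch, assuming a tracker that reports per-frame toe height and foot yaw. The FootPose fields, the thresholds, and the FootModeSwitcher class are all hypothetical illustrations of the two gestures the abstract names: a foot tap that switches manipulation modes, and a heel rotation that yields a continuous input value.

from dataclasses import dataclass

@dataclass
class FootPose:
    """Hypothetical per-frame output of a foot tracker (not FEETICHE's actual API)."""
    t: float           # timestamp in seconds
    toe_height: float  # toe height above the floor, in metres
    yaw: float         # foot orientation about the vertical axis, in radians

MODES = ["translate", "rotate", "scale"]  # assumed manipulation modes

class FootModeSwitcher:
    """Toy state machine: a quick toe lift-and-drop (tap) cycles the mode;
    yaw change while the foot is planted is reported as heel rotation."""

    TAP_HEIGHT = 0.03  # assumed: toe must lift above 3 cm ...
    TAP_WINDOW = 0.4   # ... and return to the floor within 0.4 s

    def __init__(self):
        self.mode_index = 0
        self._lift_time = None
        self._last_yaw = None

    def update(self, pose: FootPose):
        events = []
        # Tap detection: record when the toes lift, fire on a quick drop.
        if pose.toe_height > self.TAP_HEIGHT:
            if self._lift_time is None:
                self._lift_time = pose.t
        elif self._lift_time is not None:
            if pose.t - self._lift_time <= self.TAP_WINDOW:
                self.mode_index = (self.mode_index + 1) % len(MODES)
                events.append(("mode", MODES[self.mode_index]))
            self._lift_time = None
        # Heel rotation: yaw delta only while the foot stays planted.
        if self._last_yaw is not None and pose.toe_height <= self.TAP_HEIGHT:
            d_yaw = pose.yaw - self._last_yaw
            if abs(d_yaw) > 1e-3:
                events.append(("heel_rotation", d_yaw))
        self._last_yaw = pose.yaw
        return events

# Usage with a short synthetic pose stream: one tap, then a heel turn.
switcher = FootModeSwitcher()
stream = [FootPose(0.0, 0.0, 0.0), FootPose(0.1, 0.05, 0.0),
          FootPose(0.3, 0.0, 0.0), FootPose(0.5, 0.0, 0.2)]
for pose in stream:
    for event in switcher.update(pose):
        print(event)  # ('mode', 'rotate'), then ('heel_rotation', ~0.2)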
Language:
English
Peer reviewed:
Yes
Audience:
International
Popular science:
No
Collections:
Source: