Geometric Deep Neural Network using Rigid and Non-Rigid Transformations for Human Action Recognition
Document type :
Conference paper with proceedings
Permalink :
Title :
Geometric Deep Neural Network using Rigid and Non-Rigid Transformations for Human Action Recognition
Author(s) :
Friji, Rasha [Author]
Centre de Recherche Réseau Image SysTème Architecture et MuLtimédia [CRISTAL]
Drira, Hassen [Author]
Centre for Digital Systems [CERI SN]
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL]
Chaieb, Faten [Author]
École d'ingénieur généraliste en informatique et technologies du numérique [EFREI]
Kchok, Hamza [Author]
Institut National des Sciences Appliquées et de Technologie [Tunis] [INSAT]
Kurtek, Sebastian [Author]
Conference title :
International Conference on Computer Vision (ICCV)
City :
Online (videoconference)
Country :
France
Start date of the conference :
2021-10-11
HAL domain(s) :
Humanities and Social Sciences / Information and Communication Sciences
Computer Science [cs] / Computer Vision and Pattern Recognition [cs.CV]
English abstract : [en]
Deep Learning architectures, albeit successful in most computer vision tasks, were designed for data with an underlying Euclidean structure, an assumption that is often violated since pre-processed data may lie on a non-linear space. In this paper, we propose a geometry-aware deep learning approach using rigid and non-rigid transformation optimization for skeleton-based action recognition. Skeleton sequences are first modeled as trajectories on Kendall's shape space and then mapped to the linear tangent space. The resulting structured data are then fed to a deep learning architecture, which includes a layer that optimizes over rigid and non-rigid transformations of the 3D skeletons, followed by a CNN-LSTM network. Evaluation on two large-scale skeleton datasets, NTU-RGB+D and NTU-RGB+D 120, shows that the proposed approach outperforms existing geometric deep learning methods and exceeds recently published approaches in the majority of configurations.
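The preprocessing the abstract describes can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: it shows the Kendall pre-shape projection (remove translation and scale) and the Riemannian log map that lifts each frame to the tangent space at a reference shape. Rotation alignment (Procrustes) and the subsequent deep network are omitted; the 25-joint skeleton size is an assumption matching the NTU-RGB+D format.

```python
import numpy as np

def to_preshape(X):
    """Project a skeleton (k joints x 3 coords) onto Kendall's pre-shape
    sphere: subtract the joint centroid, then normalize to unit norm."""
    Xc = X - X.mean(axis=0, keepdims=True)   # remove translation
    return Xc / np.linalg.norm(Xc)           # remove scale

def log_map(mu, x):
    """Log map on the unit sphere: map point x to the tangent space at mu
    (both given as flattened unit vectors)."""
    c = np.clip(np.dot(mu, x), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-8:                         # x coincides with mu
        return np.zeros_like(mu)
    return theta / np.sin(theta) * (x - c * mu)

# Toy skeleton sequence: 2 frames of 25 joints in 3D
rng = np.random.default_rng(0)
frames = [to_preshape(rng.normal(size=(25, 3))).ravel() for _ in range(2)]
mu = frames[0]                               # reference shape (e.g. a mean)
tangents = [log_map(mu, f) for f in frames]  # linearized trajectory features
print(tangents[1].shape)                     # flattened 25*3 tangent vector
```

In the paper's pipeline, such tangent-space trajectories form the structured input passed to the transformation-optimization layer and the CNN-LSTM network.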
Language :
English
Peer reviewed article :
Yes
Audience :
International
Popular science :
No
Collections :
Source :
Submission date :
2021-12-08T02:01:07Z
Files
- https://halshs.archives-ouvertes.fr/halshs-03450533/document
- Open access