i-RevNet: Deep Invertible Networks
Document type:
Conference paper with proceedings
Title:
i-RevNet: Deep Invertible Networks
Author(s):
Jacobsen, Jörn-Henrik [Author]
Instituut voor Informatica [IvI]
Smeulders, Arnold [Author]
Instituut voor Informatica [IvI]
Oyallon, Edouard [Author]
Centre de vision numérique [CVN]
Organ Modeling through Extraction, Representation and Understanding of Medical Image Content [GALEN]
Sequential Learning [SEQUEL]
Département d'informatique - ENS-PSL [DI-ENS]
Organ Modeling through Extraction, Representation and Understanding of Medical Image Content [GALEN-POST]
Conference title:
ICLR 2018 - International Conference on Learning Representations
City:
Vancouver
Country:
Canada
Conference start date:
2018-04-30
Publication date:
2018-04
Keyword(s) in English:
analyzing CNNs
deep learning
invertible CNNs
HAL discipline(s):
Computer Science [cs]/Artificial Intelligence [cs.AI]
Statistics [stat]/Machine Learning [stat.ML]
Abstract in English: [en]
It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations in most commonly used network architectures. In this paper we show, via a one-to-one mapping, that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for one, because the local inversion is ill-conditioned; we overcome this by providing an explicit inverse. An analysis of the i-RevNet's learned representations suggests an alternative explanation for the success of deep networks: a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet, we reconstruct linear interpolations between natural image representations.
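The "explicit inverse" mentioned in the abstract can be illustrated with an additive coupling block in the style of RevNet, on which i-RevNet builds: the input is split into two halves, and each layer updates one half using a residual function of the other. The block is invertible by construction even when the residual function itself is not. The sketch below is illustrative only; the function `F` and the dimensions are arbitrary choices, not the paper's architecture.

```python
import numpy as np

def make_residual_fn(seed=0):
    # F can be an arbitrary, even non-invertible, function:
    # invertibility of the coupling block does not depend on F.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((4, 4))
    return lambda x: np.tanh(x @ W)

def forward(x1, x2, F):
    # Additive coupling: y1 = x2, y2 = x1 + F(x2)
    return x2, x1 + F(x2)

def inverse(y1, y2, F):
    # Explicit closed-form inverse: x2 = y1, x1 = y2 - F(y1)
    return y2 - F(y1), y1

F = make_residual_fn()
x1, x2 = np.ones((2, 4)), np.zeros((2, 4))
y1, y2 = forward(x1, x2, F)
rx1, rx2 = inverse(y1, y2, F)
# The input is recovered exactly: no information is discarded.
assert np.allclose(rx1, x1) and np.allclose(rx2, x2)
```

Stacking many such blocks gives a deep network that, as the abstract states, can be fully inverted up to the final classification layer.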
Language:
English
Peer-reviewed:
Yes
Audience:
International
Popular science:
No
Files
- https://hal.archives-ouvertes.fr/hal-01712808/document
- Open access
- Access the document
- http://arxiv.org/pdf/1802.07088
- Open access
- Access the document