Nonlinear Independent Component Analysis

Presenter
August 4, 2020
Abstract
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been used successfully in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using additional information, either in the form of temporal structure or an additional observed variable. We start by formulating two generative models in which the data is an arbitrary but invertible nonlinear transformation of time series (components) which are statistically independent of each other. Drawing on the theory of linear ICA, we formulate two distinct classes of temporal structure of the components which enable identification, i.e. recovery of the original independent components. We further generalize the framework to the case where, instead of temporal structure, an additional "auxiliary" variable is observed and used by means of conditioning (e.g. audio in addition to video). Our methods are closely related to the "self-supervised" methods heuristically proposed in computer vision, and provide a theoretical foundation for such methods in terms of estimating a latent-variable model. Likewise, we show how variants of deep latent-variable models such as VAEs can be seen as nonlinear ICA, and made identifiable by suitable conditioning.
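
As a rough sketch of the setup described in the abstract (the notation here is illustrative, not taken from the talk): the observed data x(t) is an invertible nonlinear mixing f of independent components s(t), and identifiability comes from assuming the components are independent conditionally on an auxiliary variable u, which may encode temporal structure (e.g. a time index or segment label) or a separately observed variable:

    \mathbf{x}(t) = \mathbf{f}\bigl(\mathbf{s}(t)\bigr), \qquad
    p\bigl(\mathbf{s}(t) \mid \mathbf{u}(t)\bigr) = \prod_{i=1}^{n} p_i\bigl(s_i(t) \mid \mathbf{u}(t)\bigr),

with f invertible. Under suitable variability of the conditional densities p_i as a function of u, the components s_i can be recovered up to component-wise invertible transformations.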
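The auxiliary-variable estimation can be sketched as a self-supervised, contrastive logistic regression: train a classifier to distinguish true pairs (x, u) from pairs in which u has been shuffled, with the classifier constrained to a sum of per-component terms so that the learned features estimate the independent components. The PyTorch sketch below is a minimal illustration of this idea under the assumptions above; all names (ContrastiveNICA, train_step, network sizes) are hypothetical, not from the talk.

    import torch
    import torch.nn as nn

    class ContrastiveNICA(nn.Module):
        """Contrastive estimator: h(x) is intended to recover the components."""
        def __init__(self, dim_x, dim_u, n_comp, hidden=64):
            super().__init__()
            # feature extractor approximating the inverse of the mixing f
            self.h = nn.Sequential(nn.Linear(dim_x, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_comp))
            # one coupling term per component between h_i(x) and u
            self.psi = nn.ModuleList([
                nn.Sequential(nn.Linear(1 + dim_u, hidden), nn.ReLU(),
                              nn.Linear(hidden, 1))
                for _ in range(n_comp)])

        def logit(self, x, u):
            # classifier score is a sum of per-component terms
            z = self.h(x)
            return sum(p(torch.cat([z[:, i:i + 1], u], dim=1))
                       for i, p in enumerate(self.psi)).squeeze(1)

    def train_step(model, opt, x, u):
        # positives: observed (x, u) pairs; negatives: u shuffled across the batch
        u_neg = u[torch.randperm(u.shape[0])]
        logits = torch.cat([model.logit(x, u), model.logit(x, u_neg)])
        labels = torch.cat([torch.ones(len(x)), torch.zeros(len(x))])
        loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # usage sketch:
    #   model = ContrastiveNICA(dim_x=10, dim_u=3, n_comp=5)
    #   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    #   loop train_step(model, opt, x_batch, u_batch) over minibatches;
    #   afterwards model.h(x) estimates the independent components
    #   (up to component-wise transformations).

The same conditioning idea applies to deep latent-variable models: replacing the unconditional prior of a VAE with a conditional prior p(s | u) of the factorized form above is what renders the model identifiable.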