
Understanding, Interpreting, and Designing Neural Network Models Through Tensor Representations

Presenter
Furong Huang, University of Maryland

May 18, 2021

Abstract
Modern deep neural networks have found tremendous empirical success in a wide variety of data science applications. Spectral methods that go beyond matrices tackle the problem of non-convexity in classic ML problems such as learning latent variable models, and provide provable performance guarantees using tensor decompositions. Our objective is to advance spectral methods so that they are adaptable to deep neural networks with guaranteed "nice" properties. We design deep neural network architectures that guarantee interpretability, expressive power, generalization, and robustness even before the start of the training process. From a function approximation perspective, most previous methods train a deep network that might have "undesirable" properties and then project the "undesirable" model onto the manifold of "desirable" models. We instead use spectral methods to design a "desirable" class of deep model functions, guaranteeing a "desirable" deep model after any training process.
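To make the abstract's reference to tensor decompositions concrete, the sketch below fits a rank-R CP (CANDECOMP/PARAFAC) decomposition of a 3-way tensor by alternating least squares — the kind of spectral method used to learn latent variable models with guarantees. This is an illustrative minimal implementation, not code from the talk; all function names and the synthetic setup are assumptions for illustration.

```python
import numpy as np

def unfold(T, mode):
    """Matricize a 3-way tensor along the given mode (row-major flattening)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def reconstruct(A, B, C):
    """Rebuild the tensor sum_r a_r (outer) b_r (outer) c_r from CP factors."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_als(T, rank, n_iter=300, seed=0):
    """Fit CP factors A, B, C to T by alternating least squares.

    Each step solves a linear least-squares problem for one factor
    while holding the other two fixed.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # With row-major unfolding, unfold(T, 0) = A @ khatri_rao(B, C).T, etc.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

For a tensor that is exactly low-rank, the recovered factors reproduce it up to the usual scaling/permutation ambiguity of CP; the uniqueness of this decomposition (under mild conditions, unlike matrix factorization) is what makes such spectral guarantees possible.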
Supplementary Materials