An empirical look at generalization in neural nets

Tom Goldstein, University of Maryland
April 22, 2020

Abstract

Generalization in neural nets is a mysterious phenomenon that has been studied by many researchers, but usually from a purely theoretical angle. In this talk, we use empirical tools and visualizations to investigate why generalization is a mystery, how "good" minima in neural loss functions are qualitatively different from "bad" ones, and why optimizers are biased towards "good" minima.