
Interpretability: of what, for whom, why, and how?

Presenter
Zachary Lipton - Carnegie Mellon University
October 17, 2019
Abstract
Historically, and particularly in the natural sciences, models were fit to empirical data to infer the values of theoretically postulated quantities. The interpretation of a model here depends on the data itself, the manner in which it was collected, and the relationships that theory predicts we should find. In modern machine learning, researchers have made tremendous strides in developing accurate predictive models. However, these methods are often agnostic to where the data came from, are applied absent any theory of the processes they are modeling, and precisely what assumptions they make (what is the inductive bias of deep learning?) remains an open question. Nevertheless, for a variety of reasons, including concerns about the applicability of ML in the real world, a desire to know “how the model thinks”, and a desire to derive insights in the natural sciences, a nascent field of interpretable machine learning has sprung up. In this talk, I’ll discuss interpretability methods through the lenses of the surrounding discourse, their validity, and empirical findings concerning their utility.