Interpretable Deep Learning Models for Forecasting

Presenter
February 22, 2018
Abstract
Deep learning models have demonstrated strong potential in forecasting problems. However, deep neural networks are usually treated as black boxes and are therefore less preferred in applications where interpretation is needed. In this talk, I will present a novel framework, the Neural Interaction Detector (NID), that identifies meaningful arbitrary-order interactions without exhaustively searching the exponential space of interaction candidates. It examines the weights of a deep neural network to interpret the statistical interactions the network captures. The key observation is that any input features that interact with each other must follow strongly weighted paths to a common hidden unit before reaching the final output. Empirical evaluation on both synthetic and real-world data showed the effectiveness of NID, which detects interactions more accurately and efficiently than state-of-the-art methods.
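To make the "strongly weighted paths to a common hidden unit" intuition concrete, here is a minimal, hypothetical sketch of weight-based interaction scoring. It is not the authors' implementation: the function names (`hidden_unit_influence`, `pairwise_interaction_scores`), the restriction to pairwise candidates, the specific aggregation (a min over first-layer weights times a back-propagated influence score), and the random toy weights are all assumptions made for illustration.

```python
import numpy as np
from itertools import combinations

def hidden_unit_influence(weights):
    """Approximate each first-layer hidden unit's influence on the output
    by chaining the absolute weights of the later layers.
    `weights` is [W1, W2, ..., Wout]; Wk has shape (units_k, units_{k-1})."""
    later = weights[1:]
    influence = np.abs(later[-1])          # (1, units in last hidden layer)
    for W in reversed(later[:-1]):
        influence = influence @ np.abs(W)  # propagate back toward the first hidden layer
    return influence.ravel()               # one influence score per first-layer unit

def pairwise_interaction_scores(weights):
    """Score candidate pairwise interactions from the first layer's weights:
    a pair (i, j) scores highly only if some hidden unit weights BOTH features
    strongly (the min) and that unit itself strongly influences the output."""
    W1 = np.abs(weights[0])                # (hidden_units, num_features)
    z = hidden_unit_influence(weights)     # (hidden_units,)
    num_features = W1.shape[1]
    scores = {}
    for i, j in combinations(range(num_features), 2):
        scores[(i, j)] = float(np.sum(z * np.minimum(W1[:, i], W1[:, j])))
    return scores

# Toy usage: random weights for a 4-feature network with two hidden layers.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)),   # W1: input -> hidden layer 1
           rng.normal(size=(5, 8)),   # W2: hidden 1 -> hidden 2
           rng.normal(size=(1, 5))]   # Wout: hidden 2 -> output
ranked = sorted(pairwise_interaction_scores(weights).items(), key=lambda kv: -kv[1])
print(ranked[:3])                      # top-ranked candidate feature pairs
```

The min over first-layer weights is what encodes the talk's key observation: a pair of features only earns a high score when both are strongly connected to the same hidden unit, so no exhaustive search over higher-order candidates is simulated here.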