
Model misspecification in reinforcement learning

Presenter
Csaba Szepesvári - DeepMind & University of Alberta
February 24, 2020
Abstract
Model misspecification refers to the situation in which the model class assumed by a learning or reasoning algorithm is only an imperfect approximation of reality. An algorithm is robust to model misspecification when its performance degrades gracefully as the model deviates further from reality, a highly desirable property for any algorithm. While model misspecification has been studied in reinforcement learning (RL) since the early days of the field, only recently have some of the special challenges that misspecification presents in RL been recognized. In this talk, I will explain these new results, put them into the context of previous research, and describe their implications together with intriguing open problems.
Supplementary Materials