
Overcoming gradient pathologies in constrained neural networks

Presenter: Paris Perdikaris, University of Pennsylvania
October 29, 2019
Abstract
The widespread use of neural networks across different scientific domains often involves constraining them to satisfy certain symmetries, conservation laws, or other domain knowledge. Such constraints are often imposed as soft penalties during model training and effectively act as domain-specific regularizers of the empirical risk loss. Physics-informed neural networks are an example of this philosophy, in which the outputs of dense neural networks are constrained to approximately satisfy a given set of partial differential equations. In this work we identify and analyze a fundamental failure mode of such approaches related to an imbalance in the magnitude of the back-propagated gradients during model training. To address this limitation, we propose a re-weighting procedure that resembles the effect of an adaptive learning rate for balancing the interplay between different terms in composite loss functions. We also propose a novel neural network architecture that is more resilient to such gradient pathologies. Taken together, our developments provide new insights into the training of constrained neural networks and consistently improve the predictive accuracy of physics-informed neural networks by a factor of 50-100x across a range of problems in computational physics.
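To make the re-weighting idea concrete, here is a minimal sketch of one gradient-statistics-based rule for balancing a composite PINN loss. It assumes a loss of the form L = L_residual + lambda_bc * L_boundary and rebalances lambda_bc so that the boundary-term gradients are comparable in magnitude to the residual-term gradients; the specific update formula, function name `update_bc_weight`, and hyperparameters are illustrative assumptions, not necessarily the exact procedure presented in the talk.

```python
import torch

def update_bc_weight(model, loss_res, loss_bc, lambda_bc, alpha=0.9, eps=1e-8):
    """Sketch of an adaptive loss-weight update from back-propagated gradient statistics.

    Assumed composite loss: L = loss_res + lambda_bc * loss_bc.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of each loss term with respect to the network parameters
    # (retain_graph=True so the caller can still backpropagate the total loss).
    grads_res = torch.autograd.grad(loss_res, params, retain_graph=True, allow_unused=True)
    grads_bc = torch.autograd.grad(loss_bc, params, retain_graph=True, allow_unused=True)

    max_res = max(g.abs().max().item() for g in grads_res if g is not None)
    mean_bc = torch.cat([g.abs().flatten() for g in grads_bc if g is not None]).mean().item()

    # Candidate weight: scale the boundary-term gradients up to the magnitude of the
    # largest residual-term gradient, then smooth with an exponential moving average
    # so the weight does not fluctuate wildly between iterations.
    lambda_hat = max_res / (mean_bc + eps)
    return alpha * lambda_bc + (1 - alpha) * lambda_hat
```

In a training loop, one would compute `loss_res` and `loss_bc` at each step, call `update_bc_weight` to refresh `lambda_bc`, and then backpropagate the re-weighted total loss `loss_res + lambda_bc * loss_bc`, which mimics an adaptive per-term learning rate.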
Supplementary Materials