
New Progress on Stochastic Variance-Reduced Methods in Machine Learning: Adaptive Restart and Distributed Optimization

Presenter: Qihang Lin
December 12, 2017
Abstract
Many statistical learning problems can be formulated as convex optimization problems with a finite-sum structure. This structure has been exploited by the stochastic variance reduced gradient (SVRG) method to reduce the computational complexity of finding the optimal solution. However, the implementation of the SVRG method depends on an unknown strong convexity parameter that is difficult to estimate exactly. To address this issue, we propose an adaptive SVRG method that automatically searches for this unknown parameter on the fly during optimization, while achieving almost the same complexity as when the parameter is known. In addition, when the machine learning problem involves both big data and a large optimization model, we propose a distributed primal-dual SVRG method that is suitable for asynchronous updates with parameter servers. In particular, we work with the saddle-point formulation of the learning problem, which allows simultaneous data and model partitioning. Compared with other first-order distributed algorithms, we show that our method may require less overall computation and communication.

Bio

Qihang Lin is an assistant professor in the Management Science Department of the Tippie College of Business at the University of Iowa. He received his B.S. in Mathematics from Tsinghua University in China in 2008 and his PhD in ACO (Algorithms, Combinatorics, and Optimization) from the Tepper School of Business at Carnegie Mellon University in 2013. Dr. Lin's research interests include: 1) large-scale optimization for machine learning; 2) stochastic optimization with applications in high-frequency trading and crowdsourcing.
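
As a rough illustration of the finite-sum structure and the basic SVRG update the abstract refers to, here is a minimal NumPy sketch; the step size, epoch length, and the synthetic least-squares example are illustrative choices, not taken from the talk, and the adaptive parameter search is not shown.

import numpy as np

def svrg(grad_i, x0, n, step_size, num_epochs, inner_steps):
    # Basic SVRG loop for a finite-sum objective f(x) = (1/n) * sum_i f_i(x).
    # grad_i(x, i) returns the gradient of the i-th component f_i at x.
    x_ref = np.asarray(x0, dtype=float).copy()
    for _ in range(num_epochs):
        # Full gradient at the snapshot (reference) point, recomputed once per epoch.
        full_grad = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(inner_steps):
            i = np.random.randint(n)
            # Variance-reduced gradient estimate: unbiased, with variance that
            # shrinks as x and x_ref approach the optimum.
            g = grad_i(x, i) - grad_i(x_ref, i) + full_grad
            x = x - step_size * g
        x_ref = x  # use the last inner iterate as the next snapshot
    return x_ref

# Illustrative use on a synthetic least-squares problem, f_i(x) = 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad_i, np.zeros(d), n, step_size=0.01, num_epochs=30, inner_steps=2 * n)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print("distance to the least-squares solution:", np.linalg.norm(x_hat - x_star))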
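
The saddle-point formulation mentioned in the abstract is, for regularized empirical risk minimization with losses \phi_i, data vectors a_i, and regularizer g, commonly written as the convex-concave problem below; this is the standard reformulation via convex conjugates, with illustrative notation rather than the exact form used in the talk.

\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \phi_i(a_i^\top w) + g(w)
\;=\;
\min_{w \in \mathbb{R}^d} \max_{\alpha \in \mathbb{R}^n} \;
\frac{1}{n}\sum_{i=1}^{n} \bigl( \alpha_i \, a_i^\top w - \phi_i^*(\alpha_i) \bigr) + g(w),

where \phi_i^* is the convex conjugate of \phi_i. Each dual variable \alpha_i is tied to one data point and each coordinate of w to one model parameter, which is what makes simultaneous data and model partitioning across workers possible.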