Speaker: Yinyu Ye, Stanford
Title: Multi-Block ADMM and its Convergence
Abstract: We show that the direct extension of the alternating direction method of multipliers (ADMM) to three blocks is not necessarily convergent, even for solving a square system of linear equations, although its convergence was established some 40 years ago for one or two blocks. However, we prove that if, in each iteration, one randomly and independently permutes the updating order of the variable blocks, followed by the regular multiplier update, then ADMM converges in expectation when solving any system of linear equations with any number of blocks. This is probably the first theoretical evidence for applying random permutation in computational optimization, where empirical results have already shown its effectiveness in both ADMM and the block coordinate descent method. We also discuss extensions to general convex optimization problems.
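The randomly permuted scheme described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the speaker's implementation: it treats the linear system Ax = b as the problem min 0 subject to the sum of the blocks equaling b, uses one coordinate per block, a fixed penalty parameter beta, and a fresh uniformly random block order each sweep followed by the regular multiplier update. The function name `rp_admm` and all parameter choices are my own assumptions for the demo.

```python
import numpy as np

def rp_admm(A, b, beta=1.0, iters=20000, rng=None):
    """Randomly permuted ADMM for Ax = b, one coordinate per block.

    View the system as  min 0  s.t.  sum_i a_i x_i = b,  with augmented
    Lagrangian  L = lam^T (Ax - b) + (beta/2) ||Ax - b||^2.  Each sweep
    updates the blocks in a fresh, uniformly random, independent order,
    then performs the regular multiplier update.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(iters):
        for i in rng.permutation(n):  # random, independent block order
            a_i = A[:, i]
            # Right-hand side seen by block i: b minus the other blocks.
            r = b - (A @ x - a_i * x[i])
            # Exact minimization of L over x_i, other blocks and lam fixed.
            x[i] = a_i @ (r - lam / beta) / (a_i @ a_i)
        lam += beta * (A @ x - b)  # regular multiplier update
    return x

# Demo on a 3x3 system with three blocks -- the setting where the
# abstract notes that the cyclic (fixed-order) extension can diverge.
A = np.array([[1., 1., 1.],
              [1., 1., 2.],
              [1., 2., 2.]])
b = A @ np.array([1., 2., 3.])
x = rp_admm(A, b, rng=0)
```

With the random permutations, the residual ||Ax - b|| shrinks toward zero over the sweeps, consistent with the convergence-in-expectation result; replacing `rng.permutation(n)` with the fixed order `range(n)` recovers the cyclic variant that the talk's counterexample concerns.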
Bio: Yinyu Ye is currently the Kwoh-Ting Li Professor in the School of Engineering, in the Department of Management Science and Engineering and the Institute for Computational and Mathematical Engineering, and the Director of the MS&E Industrial Affiliates Program at Stanford University. He received the B.S. degree in System Engineering from the Huazhong University of Science and Technology, China, and the M.S. and Ph.D. degrees in Engineering-Economic Systems and Operations Research from Stanford University. Ye's research interests lie in the areas of optimization, complexity theory, algorithm design and analysis, and applications of mathematical programming, operations research, and systems engineering. He is also interested in developing optimization software for various real-world applications. Current research topics include Linear Programming Algorithms, Markov Decision Processes, Computational Game/Market Equilibrium, Metric Distance Geometry, Dynamic Resource Allocation, and Stochastic and Robust Decision Making. He is an INFORMS (The Institute for Operations Research and the Management Sciences) Fellow, and has received several research awards, including the 2014 SIAG/Optimization Prize, awarded every three years to the author(s) of the most outstanding paper; the inaugural 2012 ISMP Tseng Lectureship Prize for outstanding contribution to continuous optimization; the 2009 John von Neumann Theory Prize for fundamental sustained contributions to theory in Operations Research and the Management Sciences; the inaugural 2006 Farkas Prize in Optimization; and the 2009 IBM Faculty Award. He has supervised numerous doctoral students at Stanford who received the 2015 and 2013 Second Prizes in the INFORMS Nicholson Student Paper Competition, the 2013 INFORMS Computing Society Prize, the 2008 Nicholson Prize, and the 2006 and 2010 INFORMS Optimization Prizes for Young Researchers.
Ye teaches courses on Optimization, Network and Integer Programming, and Semidefinite Programming, among others. He has written extensively on Interior-Point Methods, Approximation Algorithms, Conic Optimization, and their applications, and has served as a consultant or technical board member for a variety of companies, including MOSEK.