
CME 510: Linear Algebra and Optimization Seminar

SPRING 2018 SCHEDULE

CME 510 is held on Thursdays at 4:30-5:30 PM in Y2E2-101.
 
April 5: Michael Saunders, MS&E and ICME, Stanford University
 
Title: Algorithm NCL for constrained optimization
 
Abstract: Standard optimization solvers have difficulty if too many constraints are essentially active at a solution.  For example, problems of the form min f(x) s.t. c(x) >= 0 (m constraints and n variables) may have more than n constraints active at a solution.  Such problems arise in the modeling of tax policy (with perhaps millions of constraints and thousands of variables).
 
Algorithm NCL solves a sequence of about 10 subproblems with regularized constraints c(x) + r >= 0 and an augmented Lagrangian objective f(x) - y'r + 1/2 rho r'r that drives the extra variables r to zero.  Interior methods are able to warm start each subproblem. Assuming second derivatives are available, NCL expands the use of interior methods for large-scale optimization.
 
Joint work with Ding Ma, Kenneth Judd, and Dominique Orban.
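To make the scheme concrete, here is a minimal sketch of the NCL loop on a made-up one-dimensional problem (our toy, not the tax-policy model from the talk), using scipy's general-purpose SLSQP solver in place of an interior method:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: minimize f(x) = x^2 subject to c(x) = x - 1 >= 0.
f = lambda x: x ** 2
c = lambda x: x - 1.0

def ncl_subproblem(y, rho, z0):
    # Variables z = (x, r): regularized constraint c(x) + r >= 0 and
    # augmented Lagrangian objective f(x) - y*r + (rho/2)*r^2.
    obj = lambda z: f(z[0]) - y * z[1] + 0.5 * rho * z[1] ** 2
    cons = {"type": "ineq", "fun": lambda z: c(z[0]) + z[1]}
    return minimize(obj, z0, constraints=[cons]).x

y, rho = 0.0, 1.0
z = np.array([0.0, 0.0])
for _ in range(6):                      # a short sequence of subproblems
    z = ncl_subproblem(y, rho, z)       # warm start from the previous solution
    y -= rho * z[1]                     # multiplier update drives r toward zero
    rho *= 10.0
print(z)                                # x approaches 1, extra variable r approaches 0
```

On this toy problem the extra variable r shrinks by roughly a factor of ten per subproblem as rho grows, mimicking the behavior the abstract describes.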
 
April 12: Qiaochu Yuan, UC Berkeley
 
Title: A tour of the Lanczos algorithm and its convergence guarantees through the decades
 
Abstract: We review the Lanczos algorithm and the development of its variants (block Lanczos, randomized block Lanczos) from the 1950s until the present day.  We focus on theoretical convergence results for each of these algorithms that provide bounds on the approximation accuracy. We conclude by presenting new results that provide a convergence bound for randomized block Lanczos with arbitrary block sizes.
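For readers unfamiliar with the method, a minimal single-vector Lanczos sketch (no reorthogonalization, one of the issues the talk's variants address) looks like:

```python
import numpy as np

def lanczos(A, v0, k):
    """Plain Lanczos: build a k x k tridiagonal T whose extreme eigenvalues
    (Ritz values) approximate the extreme eigenvalues of symmetric A."""
    n = len(v0)
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2                       # symmetric test matrix
T = lanczos(A, rng.standard_normal(200), 30)
# The largest Ritz value closely approximates the largest eigenvalue of A
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```

The convergence bounds surveyed in the talk quantify exactly how fast the extreme Ritz values of T approach the extreme eigenvalues of A as k grows.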
 
April 19: Esmond Ng, LBNL 
 
Title: Revisiting an elimination game
 
Abstract: We consider a game on an undirected graph, in which vertices and edges are eliminated.  When a vertex is eliminated, the incident edges are removed, but new edges are added to the graph according to a prescribed rule.  The added edges will eventually be removed, and the graph will be empty at the end of the game.  We are interested in the total number of new edges that the game sees throughout the elimination.
 
The order in which the vertices are eliminated affects the number of added edges.  Thus, we are interested in finding an elimination order that minimizes a function that depends on the number of added edges.
 
We provide an overview of the elimination game and revisit several strategies.  The game is relevant to the problem of computing the LU factors of a sparse matrix.
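One common instance of the prescribed rule (the one arising in symmetric Gaussian elimination, used here purely as an illustration): when a vertex is eliminated, its remaining neighbors are made pairwise adjacent. A short sketch that counts the added ("fill") edges for a given order:

```python
from itertools import combinations

def fill_in(adj, order):
    """Count edges added by eliminating vertices in the given order, where
    eliminating v connects all pairs of its remaining neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    added = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u, w in combinations(nbrs, 2):
            if w not in adj[u]:
                adj[u].add(w)
                adj[w].add(u)
                added += 1
    return added

# Path 0-1-2-3: eliminating endpoints first adds no edges;
# starting with an interior vertex creates fill.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(fill_in(path, [0, 1, 2, 3]))  # 0
print(fill_in(path, [1, 2, 0, 3]))  # 2
```

Finding an order that minimizes this count is the NP-hard minimum fill-in problem, which is why the heuristic strategies the talk revisits matter in practice.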
 
April 26: Austin Benson, Cornell University
 
Title: Simplicial closure and higher-order link prediction
 
Abstract: Networks are a fundamental abstraction of complex systems throughout the sciences and are typically represented by a graph consisting of nodes and edges.  However, many systems have important higher-order interactions, where groups of nodes interact simultaneously, and such higher-order relations are not captured by a graph containing only pairwise connections.  While there are certainly mathematical formalisms for expressing higher-order interactions (such as hypergraphs and simplicial complexes), we lack a systematic way to evaluate higher-order network models.  Here, we evaluate higher-order network models through a new prediction problem for the temporal evolution of higher-order structure, which we call "higher-order link prediction".  The central idea of this task is to predict which new sets of nodes will interact in the future, given the history of the system.  We develop a number of algorithms for higher-order link prediction and use them to predict group interactions in systems from biomedicine, social media, human collaboration, and communications.  These algorithms rely on several large-scale sparse matrix computations that raise a number of scalability challenges.
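To make the task concrete, here is a tiny sketch (our illustration, on hypothetical data) of the simplest candidate set used in this line of work: "open triangles", triples of nodes that are pairwise connected in the projected graph but have never appeared together in a single interaction:

```python
from itertools import combinations

def open_triangles(hyperedges):
    """Triples whose three pairs each co-occur in some interaction,
    but which never co-occur as a full triple: candidates for closure."""
    pairs, triples = set(), set()
    for e in hyperedges:
        pairs.update(combinations(sorted(e), 2))
        triples.update(combinations(sorted(e), 3))
    nodes = sorted({v for e in hyperedges for v in e})
    return [t for t in combinations(nodes, 3)
            if all(p in pairs for p in combinations(t, 2)) and t not in triples]

# Nodes 1,2,3 are pairwise connected but never interacted as a group;
# nodes 3,4,5 already interacted as a triple, so they are not "open".
print(open_triangles([{1, 2}, {2, 3}, {1, 3}, {3, 4, 5}]))  # [(1, 2, 3)]
```

Scoring which of these open triangles will close is where the large-scale sparse matrix computations mentioned above come in.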
 
May 3: Osni Marques, Computational Research Division, Scalable Solvers Group, Lawrence Berkeley National Laboratory
 
Title: Solving Nonlinear Eigenvalue Problems with Padé Approximations
 
Abstract: We discuss a strategy for finding nontrivial solutions (lambda, x) for a class of nonlinear eigenvalue problems of the form F(lambda)x = 0.  We use rational functions to approximate the nonlinear terms of the problem, in tandem with Padé approximants.  This approximation allows a linearization of the original problem and its mapping into a generalized eigenvalue problem of larger dimension.  The dimension of the larger problem can be controlled by exploiting low-rank properties of the operators involved.
 
To illustrate, we consider problems related to the modeling of waveguide-loaded accelerator cavities through a finite element discretization of Maxwell's equations.  We discuss the impact of the degree of the Padé approximants on the linearization process and on convergence, and alternatives for solving the resulting linearized problem.
 
Joint work with Y. Zhou and Z. Bai.
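A minimal sketch of the linearization step on a made-up problem (not the accelerator-cavity model): take F(lambda) = A - lambda*I + exp(-lambda)*C, replace exp(-lambda) by its [1/1] Padé approximant (1 - lambda/2)/(1 + lambda/2), clear the denominator to get a quadratic matrix polynomial, and apply a companion linearization to obtain a generalized eigenvalue problem of dimension 2n:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
C = 0.1 * rng.standard_normal((n, n))
I = np.eye(n)

# (1 + lam/2)(A - lam*I)x + (1 - lam/2)C x = 0  =>  (M0 + lam*M1 + lam^2*M2) x = 0
M0 = A + C
M1 = 0.5 * A - I - 0.5 * C
M2 = -0.5 * I

# Companion linearization: L z = lam * R z with z = [x; lam*x]
L = np.block([[np.zeros((n, n)), I], [-M0, -M1]])
R = np.block([[I, np.zeros((n, n))], [np.zeros((n, n)), M2]])
lams, Z = eig(L, R)

lam, x = lams[0], Z[:n, 0]
res = np.linalg.norm((M0 + lam * M1 + lam ** 2 * M2) @ x)
print(res)  # near machine precision: lam solves the linearized problem
```

The eigenvalues of the 2n-by-2n pencil approximate those of the original nonlinear problem to the accuracy of the Padé approximant; exploiting low-rank structure in C is what keeps the enlarged problem manageable.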
 
May 10: Inderjit Dhillon, UT Austin
 
Title: Stabilizing Gradients for Deep Neural Networks


Abstract: Vanishing and exploding gradients are two main obstacles in training deep neural networks, especially when trying to capture long-range dependencies in recurrent neural networks (RNNs). I will present an efficient parametrization of the transition matrix of an RNN that stabilizes the gradients that arise in its training. Specifically, we parameterize the transition matrix by its singular value decomposition (SVD), which allows us to explicitly track and control its singular values. We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD. We motivate and present results on the Inline Search Suggestions (ISS) application at Amazon Search.

Bio: Among other accomplishments, Prof. Dhillon is an ACM Fellow, IEEE Fellow, SIAM Fellow, and AAAS Fellow, and has won the SIAM Linear Algebra Prize.
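A toy sketch of the parametrization idea (our illustration, assuming the setup described in the abstract): represent the two orthogonal SVD factors as products of Householder reflectors, so the singular values of the transition matrix can be pinned near 1:

```python
import numpy as np

def householder_product(U):
    """Orthogonal Q = H_k ... H_1, each H_i = I - 2 u_i u_i^T a reflector;
    rows of U are the (unnormalized) reflector vectors u_i."""
    n = U.shape[1]
    Q = np.eye(n)
    for u in U:
        u = u / np.linalg.norm(u)
        Q -= 2.0 * np.outer(u, u @ Q)   # apply H_i on the left without forming it
    return Q

rng = np.random.default_rng(0)
n = 8
Q1 = householder_product(rng.standard_normal((n, n)))
Q2 = householder_product(rng.standard_normal((n, n)))
s = rng.uniform(0.9, 1.1, n)            # singular values kept near 1
W = Q1 @ np.diag(s) @ Q2                # transition matrix W = U diag(s) V^T
print(np.allclose(np.linalg.svd(W, compute_uv=False), np.sort(s)[::-1]))  # True
```

Because the singular values appear explicitly as parameters, constraining them to an interval around 1 directly controls how gradient norms grow or shrink through time.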

 
May 17: Zhaojun Bai, UC Davis
 
Title: Nonlinear Eigenvalue Problems with Eigenvector Nonlinearity
 
Abstract: Nonlinear eigenvalue problems with eigenvector nonlinearity (NEPv) arise in electronic structure calculations and robust data clustering, among other applications.  The NEPv is a much less explored topic than nonlinear eigenvalue problems with eigenvalue nonlinearity.  From a linear algebra point of view, I will start this talk with recent work on the existence and uniqueness of NEPv solutions, and then present local and global convergence analysis of the widely used self-consistent field (SCF) iteration for solving NEPv.  In addition, I will discuss a number of practical issues for solving NEPv, such as extracting many eigenpairs.
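As a concrete toy example of the SCF iteration (our illustration): take the Gross-Pitaevskii-like NEPv H(v) v = lambda v with H(v) = A + alpha*diag(v^2). SCF freezes the eigenvector inside H, solves the resulting linear eigenproblem, and repeats:

```python
import numpy as np

def scf(A, alpha, tol=1e-12, maxit=200):
    """Self-consistent field iteration for H(v) v = lam v, H(v) = A + alpha*diag(v^2)."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(maxit):
        H = A + alpha * np.diag(v ** 2)           # freeze the eigenvector nonlinearity
        lam, V = np.linalg.eigh(H)                # solve the linear eigenproblem
        v_new = V[:, 0] * np.sign(V[:, 0] @ v)    # smallest eigenpair, sign fixed
        if np.linalg.norm(v_new - v) < tol:
            return lam[0], v_new
        v = v_new
    return lam[0], v

# 1D discrete Laplacian plus a mild nonlinearity
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = scf(A, 0.1)
residual = np.linalg.norm((A + 0.1 * np.diag(v ** 2)) @ v - lam * v)
print(residual)  # self-consistent: nonlinear eigenresidual is tiny
```

The convergence analysis in the talk explains when and how fast such a fixed-point loop contracts; for stronger nonlinearities the plain iteration can stall or cycle.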
 
May 24: Huda Nassar, CS, Purdue
 
Title: Low-rank methods for network alignment 
 
Abstract: Network alignment is the problem of finding a large common subgraph between two graphs, and more generally among k graphs.  The problem is NP-hard, and algorithms have been devised to approach it by solving some relaxed version of the NP-hard problem or by defining certain heuristic measures.  These algorithms normally work well when some form of prior similarity between the nodes of the graphs to be aligned is known.  The absence of such information makes the problem much harder.  In this scenario, the algorithms often require much more time to finish, and sometimes fail altogether.
 
I will give a brief overview of the network alignment problem and focus on the case when the similarity measure is absent.  I will discuss two recent algorithms that address pairwise network alignment as well as multiple network alignment when no prior similarity is provided.  These algorithms exploit a low-rank structure in the network alignment problem and thus run much faster than classic state-of-the-art network alignment algorithms.
 
Bio: Huda Nassar obtained her Bachelor's degree in both Mathematics and Computer Science at the American University of Beirut.  She is a fourth-year CS PhD student at Purdue University with advisor David Gleich (from ICME).  Her research is on numerical linear algebra and network analysis, including network alignment and link prediction.  Huda is a Julia enthusiast and presented a Julia for Data Science workshop at WiDS Purdue in March of this year.
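A toy sketch of why low rank helps (our illustration, not the talk's algorithms): if the node-similarity matrix happens to be rank-1, S = u v', then greedy matching on S reduces to sorting u and v separately, so the n-by-n similarity matrix never needs to be formed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
u, v = rng.random(n), rng.random(n)   # per-node scores for the two graphs

# Low-rank shortcut: pair the i-th largest u with the i-th largest v, O(n log n)
lowrank = set(zip(np.argsort(-u), np.argsort(-v)))

# Baseline: greedy matching on the dense similarity matrix S = u v^T
S = np.outer(u, v)
dense = set()
for _ in range(n):
    i, j = np.unravel_index(np.argmax(S), S.shape)
    dense.add((i, j))
    S[i, :] = -np.inf                 # remove matched row and column
    S[:, j] = -np.inf
print(dense == lowrank)               # True: same matching without forming S
```

Real alignment algorithms use richer (rank-k) similarity structure, but the same principle lets them avoid materializing the quadratic-size similarity matrix.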
 
May 31: Dominique Orban, Ecole Polytechnique, Montreal
 
Title: Iterative Methods with an Error Minimization Property
 
Abstract: Motivated by a seismic inversion PDE-constrained optimization problem and an exact penalty approach for equality-constrained optimization, we develop an iterative method named LSLQ for solving linear least-squares problems of any shape and a companion method named LNLQ for least-norm problems.  Both methods are based on the Golub-Kahan process, where the dominant cost is products with A and its transpose, but A is not required explicitly.  We provide cheaply computable lower and upper bounds on the error in the Euclidean norm along the iterations.  In both methods, the current estimate's error norm decreases monotonically.  Each upper bound translates to an upper bound on the error norm along related conjugate-gradient iterations, and provides an error-based stopping criterion.  We report numerical experiments on standard test problems and on the two applications mentioned above.
 
Joint work with Ron Estrin and Michael Saunders, ICME.
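A sketch of the underlying Golub-Kahan process (with explicit matrices for clarity; in LSLQ/LNLQ only operator products with A and A' are needed):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k with
    B_k lower bidiagonal; only products with A and A.T are used."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        w = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(w)
        V[:, j] = w / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    return U, V, alpha, beta

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
k = 8
U, V, alpha, beta = golub_kahan(A, b, k)
B = np.zeros((k + 1, k))
B[np.arange(k), np.arange(k)] = alpha          # diagonal
B[np.arange(1, k + 1), np.arange(k)] = beta[1:]  # subdiagonal
print(np.allclose(A @ V, U @ B))               # the bidiagonal recurrence holds
```

LSLQ and LNLQ run iterations in the small bidiagonal space generated this way; the error bounds the abstract mentions are computed from quantities in B.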
 
Bio: Dominique Orban is Associate Professor of computational mathematics at Ecole Polytechnique and a member of the GERAD Group for Research in Decision Analysis in Montreal, Canada.  He obtained his degrees from Namur (Belgium) and Toulouse (France) in continuous optimization and has a keen interest in the linear algebra problems occurring at the core of optimization methods.  He has authored over 40 papers and has contributed to several software packages on optimization and linear algebra, including the increasingly CUTE family.  Dominique is a member of SIAM, the Mathematical Optimization Society, and the Association for Computing Machinery.