Random matrix theory (RMT) has long been used to study the spectral properties of physical systems, and has led to a rich interplay between probability theory and physics. Historically, random matrices have been used to model physical systems with random fluctuations, or systems whose eigenproblems were too difficult to solve numerically. This talk explores applications of RMT to the physics of disorder in organic semiconductors [2,3]. Revisiting the old problem of Anderson localization has shed new light on the emerging field of free probability theory. I will discuss the implications of free probabilistic ideas for finite-dimensional random matrices, as well as some hypotheses about eigenvector locality. Algorithms are available in the RandomMatrices.jl package written for the Julia programming language.
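The talk's algorithms live in the Julia package RandomMatrices.jl; purely as a language-agnostic illustration of the spectral behavior RMT studies, the NumPy sketch below samples a Gaussian Orthogonal Ensemble matrix and compares its eigenvalue density to the Wigner semicircle law. The matrix size and normalization are my own choices, not taken from the talk.

```python
import numpy as np

def goe(n, rng):
    """Sample an n x n Gaussian Orthogonal Ensemble matrix, normalized so the
    eigenvalue density converges to the Wigner semicircle on [-2, 2]."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / np.sqrt(2.0 * n)

rng = np.random.default_rng(0)
n = 2000
eigs = np.linalg.eigvalsh(goe(n, rng))

# Compare the empirical eigenvalue density with the semicircle law
# rho(x) = sqrt(4 - x^2) / (2 pi).
hist, edges = np.histogram(eigs, bins=50, range=(-2.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(4.0 - centers**2, 0.0, None)) / (2.0 * np.pi)
print("mean absolute deviation from the semicircle:",
      np.abs(hist - semicircle).mean())
```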
This seminar will be recorded and posted online one hour after it concludes. Please visit https://www.youtube.com/user/ICMEStudio to watch this talk.
We use mathematical modeling to explore the ramifications of targeting preventive disease measures to undernourished children. We consider a malaria model with superinfection and heterogeneous susceptibility, where a portion of this susceptibility is due to undernutrition (as measured by weight-for-age z-scores). The portion of the total susceptibility that is due to undernutrition is estimated from a large randomized trial of supplementary feeding. We compute malaria morbidity and mortality for a variety of policies involving supplementary food and insecticide-treated nets.
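The actual model (superinfection, trial-estimated susceptibility) is the subject of the talk; purely as a schematic of heterogeneous susceptibility and intervention coverage, here is a toy two-group SIS-style sketch in which every parameter value and the model structure itself are my own inventions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-group SIS-style sketch (NOT the speakers' model): group 0 = adequately
# nourished, group 1 = undernourished with elevated susceptibility. Bed-net
# coverage scales the transmission rate; all parameter values are invented.
beta, gamma = 0.3, 0.1           # baseline transmission and recovery rates
rel_susc = np.array([1.0, 1.6])  # assumed relative susceptibility by group
frac = np.array([0.8, 0.2])      # population fraction in each group

def rhs(t, I, coverage):
    S = frac - I                              # susceptibles in each group
    force = beta * (1 - coverage) * I.sum()   # force of infection, reduced by nets
    return rel_susc * force * S - gamma * I

for coverage in (0.0, 0.3, 0.6):
    sol = solve_ivp(rhs, (0, 365), y0=[1e-3, 1e-3], args=(coverage,), rtol=1e-8)
    print(f"net coverage {coverage:.0%}: prevalence by group = {sol.y[:, -1]}")
```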
Title: Fast and flexible linear algebra in Julia
Abstract: Applied scientists often develop computer programs exploratively, where data examination, manipulation, visualization, and code development are tightly coupled. Traditionally, the programming languages used are slow, with performance-critical computations relegated to library code written in languages on the other side of Ousterhout's dichotomy, e.g., LAPACK. I will introduce the Julia programming language and argue that it is well suited for computational linear algebra. Julia provides features for exploratory program development, yet the language itself can be almost as fast as C and Fortran. Furthermore, Julia's rich type system makes it possible to extend linear algebra functions with user-defined element types, such as finite fields or strings with algebraic structure attached. I will show examples of Julia programs that are relatively simple, yet fast and flexible at the same time. Finally, the potential and challenges for parallel linear algebra in Julia will be discussed.
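The point about user-defined element types is language-independent. As a rough analogue (in Python rather than Julia, and with a routine and example system of my own), the sketch below shows one Gaussian-elimination solver that works unchanged over floats or exact rationals, which is the kind of genericity the abstract describes.

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting, generic over the element type.

    Works for any entries supporting +, -, *, / and abs() comparison --
    e.g. float, Fraction, or a user-defined field element.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]          # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [None] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[Fraction(1, 3), Fraction(1, 5)], [Fraction(1, 7), Fraction(1, 2)]]
b = [Fraction(1), Fraction(2)]
print(solve(A, b))   # exact rational solution, no floating-point rounding
```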
Dealing with Thread Divergence in a GPU Monte Carlo Radiation Therapy Simulator
Deep learning and 2D probabilistic context-free grammars for parsing images of math formulas
Guenther Walther is a professor in the Department of Mathematics at Stanford. He studied mathematics, economics, and computer science at the University of Karlsruhe in Germany and received his Ph.D. in Statistics from UC Berkeley in 1994. His research has focused on statistical methodology for detection problems, shape-restricted inference, and mixture analysis, and on statistical problems in astrophysics and in flow cytometry.
In this era of large-scale data, distributed systems built on top of clusters of commodity hardware provide cheap and reliable storage and scalable processing of massive data. In this talk, we review recent work on developing and implementing randomized matrix algorithms in large-scale parallel and distributed environments. Our main focus is on the underlying theory and practical implementation of random projection and random sampling algorithms for very large, very overdetermined least squares regression problems. Theoretical results demonstrate that in near input-sparsity time and with only a few passes through the data one can obtain very strong relative-error approximate solutions, with high probability. We evaluate the performance of these algorithms on terabyte-sized data in existing distributed systems using Spark. These empirical results highlight the importance of various trade-offs (e.g., between the time to construct an embedding and the conditioning quality of the embedding, between the relative importance of computation versus communication, etc.) and demonstrate that least squares problems can be solved to low, medium, or high precision.
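The production setting in the talk is Spark on terabyte-scale data; as a single-machine illustration of the sketch-and-solve idea, the NumPy snippet below applies a CountSketch embedding (one example of an input-sparsity-time random projection) to an overdetermined least squares problem. The matrix sizes, sketch dimension, and noise level are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100_000, 50                    # very overdetermined: m >> n
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

def countsketch(M, s, rng):
    """Apply a CountSketch embedding with s rows in one pass over M: each row of
    M is added, with a random sign, to one uniformly chosen row of the sketch."""
    rows = rng.integers(0, s, size=M.shape[0])
    signs = rng.choice([-1.0, 1.0], size=M.shape[0])
    SM = np.zeros((s, M.shape[1]))
    np.add.at(SM, rows, signs[:, None] * M)
    return SM

# Sketch the augmented matrix [A | b], then solve the much smaller problem.
s = 20 * n
SAb = countsketch(np.column_stack([A, b]), s, rng)
x_sketch, *_ = np.linalg.lstsq(SAb[:, :n], SAb[:, n], rcond=None)

res_sketch = np.linalg.norm(A @ x_sketch - b)
res_exact = np.linalg.norm(A @ np.linalg.lstsq(A, b, rcond=None)[0] - b)
print("sketched residual / optimal residual:", res_sketch / res_exact)
```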
Eric Darve, Associate Professor of Mechanical Engineering at Stanford, will be talking about various projects in his group including fast linear solvers, hierarchical matrices, and task-based parallel programming systems.
The problem of solving a nonsmooth nonconvex program with nonsmooth constraints differs significantly from its smooth counterpart. When the objective and constraint functions are no longer C^2 but only locally Lipschitz, the first-order optimality conditions no longer lead to a system of equations but to a set relation involving subdifferentials. As a consequence, algorithmic approaches different from those for the smooth case have to be taken to compute solutions. One of the most promising classes of solvers is bundle methods.
In this talk I will present the first step on a roadmap towards an algorithm for solving general nonsmooth nonconvex programs. Taking inspiration from the SQP method for smooth optimization, we develop a second-order bundle method for minimizing a nonsmooth objective function subject to nonsmooth inequality constraints, starting from a strictly feasible point. Instead of using a penalty function, a filter, or a merit function to deal with the constraints, we determine the search direction by solving a convex quadratically constrained quadratic program to obtain good iteration points. Furthermore, global convergence of the method is proved under certain mild assumptions. Numerical results for a concrete implementation will be presented, as well as an application to certificates of infeasibility and exclusion boxes for numerical constraint satisfaction problems.
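The second-order bundle method with QCQP subproblems is the talk's contribution and well beyond a short snippet. As background on the cutting-plane/bundle family it belongs to, here is Kelley's classical cutting-plane method applied to a toy one-dimensional nonsmooth convex problem; the objective, bounds, and tolerances are my own and this is not the speaker's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D nonsmooth convex objective and a subgradient oracle (both invented).
f = lambda x: abs(x - 1.0) + 2.0 * abs(x + 3.0)
g = lambda x: np.sign(x - 1.0) + 2.0 * np.sign(x + 3.0)

lo, hi = -10.0, 10.0
x, cuts, fbest = 5.0, [], np.inf
for it in range(50):
    fx, gx = f(x), g(x)
    fbest = min(fbest, fx)
    cuts.append((gx, gx * x - fx))           # cut: gx * x' - t <= gx * x - f(x)
    A_ub = [[gk, -1.0] for gk, _ in cuts]    # decision variables are (x', t)
    b_ub = [rk for _, rk in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi), (-1e3, None)])
    x, lower = res.x                         # next trial point, model lower bound
    if fbest - lower < 1e-8:                 # stop when the model gap closes
        break

print("approximate minimum:", fbest, "near x =", x)
```

A bundle method refines this idea by keeping a managed bundle of cuts and adding a stabilizing (here, second-order) term around the current iterate.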
Bio: Hermann Schichl is currently an associate professor at the computer mathematics group at the faculty of mathematics of the University of Vienna, Austria. He received his PhD working on infinite-dimensional differential geometry at the same university in 1998. His research interests include mixed-integer global optimization, nonsmooth optimization, mathematical modeling, rigorous computing, operations research, and computational science. He is the main developer of the COCONUT environment, a modular software platform for global optimization algorithms.
The talk will cover the basics of satellite electric propulsion, focus on recent developments in hardware design and integration and in understanding of the fundamental plasma effects, and discuss existing modeling capabilities and challenges to be addressed in future research.
Interpolatory factorizations provide alternatives to the singular value decomposition for low-rank matrix approximations; this class includes the CUR factorization, where the C and R matrices are formed from subsets of columns and rows of the matrix being approximated. While interpolatory approximations lack the SVD's optimality, their ingredients are easier to interpret than singular vectors: since they come directly from the matrix itself, they inherit the data's key properties (e.g., nonnegative/integer values, sparsity, etc.). We shall provide an overview of these approximate factorizations, describe how they can be analyzed using interpolatory projectors, and introduce a new method for their construction based on the Discrete Empirical Interpolation Method (DEIM). This talk describes joint work with Dan Sorensen (Rice).
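As a minimal sketch of the ideas mentioned above, the following NumPy code performs the standard greedy DEIM index selection on leading singular vectors and builds a CUR approximation from the selected rows and columns. The test matrix and rank are my own; this follows the textbook DEIM selection and is not necessarily the exact variant presented in the talk.

```python
import numpy as np

def deim(U):
    """Greedy DEIM index selection from an orthonormal basis U (n x k)."""
    n, k = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[idx, :j], U[idx, j])   # interpolate column j
        r = U[:, j] - U[:, :j] @ c                   # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))        # pick the largest residual
    return np.array(idx)

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))  # low rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20

rows = deim(U[:, :k])            # row indices from the left singular vectors
cols = deim(Vt[:k, :].T)         # column indices from the right singular vectors
C, R = A[:, cols], A[rows, :]    # factors built from the data itself
U_mid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

err_cur = np.linalg.norm(A - C @ U_mid @ R) / np.linalg.norm(A)
err_svd = np.linalg.norm(A - (U[:, :k] * s[:k]) @ Vt[:k]) / np.linalg.norm(A)
print("relative CUR error:", err_cur, " best rank-k error:", err_svd)
```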
NOTE: This talk will replace the CME 500 seminar on 2/2 and will be held in a different location.
CME 500 will be replaced by the Distinguished Speaker Series on 2/2.
For challenging numerical problems, William Kahan has said that "default evaluation in Quad is the humane option". Fortunately the gfortran compiler allows us to change "real(8)" to "real(16)" everywhere. This is the humane option for producing Quad-precision software.
We describe experiments on multiscale linear and nonlinear optimization problems using a Quad implementation of MINOS. On a range of examples we find that 34-digit Quad floating-point achieves exceptionally small primal and dual infeasibilities (of order 1e-30) when "only" 1e-15 is requested.
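The talk concerns a Fortran real(16) build of MINOS; as a small stand-in experiment (my own example, using the Hilbert matrix and Python's mpmath rather than gfortran), the snippet below shows what 34-digit arithmetic buys on an ill-conditioned solve relative to double precision.

```python
import numpy as np
import mpmath as mp

# An ill-conditioned test system: the 12 x 12 Hilbert matrix with x_true = 1.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_double = np.linalg.solve(A, A @ np.ones(n))
print("double precision max error:", np.max(np.abs(x_double - 1.0)))

# The same solve carried out with 34 significant digits (quad-like precision).
mp.mp.dps = 34
A_q = mp.matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
x_quad = mp.lu_solve(A_q, A_q * mp.matrix([1] * n))
print("34-digit max error:", max(abs(x_quad[i] - 1) for i in range(n)))
```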
Andrew Spakowitz is an Associate Professor of Chemical Engineering and (by courtesy) of Materials Science and Engineering and of Applied Physics at Stanford University. The Spakowitz lab is engaged in projects that address fundamental chemical and physical processes that underlie a range of key biological mysteries and cutting-edge materials applications. Current research in our lab focuses on three main research themes: DNA Biophysics, Protein Self Assembly, and Charge Transport in Conjugated Polymers. These broad research areas offer complementary perspectives on chemical and physical processes, and we leverage this complementarity throughout our research. Our approach draws from a diverse range of theoretical and computational methods, including analytical theory of semiflexible polymers, polymer field theory, continuum elastic mechanics, Brownian dynamics simulation, equilibrium and dynamic Monte Carlo simulations, and analytical theory and numerical simulations of reaction-diffusion phenomena. A common thread in our work is the need to capture phenomena over many length and time scales, and our flexibility in research methodologies allows us to address these problems at an unprecedented level of precision.
Building blocks for resilient applications
BIG MATH IN SMALL COMPANIES: A Quant Perspective on Entrepreneurship.
ICME TGIF Seminar. January 23rd 4-5pm, Y2E2 Room 111.
Hear from ICME alumni and others who have founded or joined start-ups after grad school.
Panelists will include:
The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of dense matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation.
In this talk, I'll show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. I'll discuss the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. As a result, our final parallel QR algorithm outperforms ScaLAPACK and Elemental implementations of Householder QR as well as our implementation of CAQR on Cray XE6 and XC30 systems at NERSC.
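The Householder-reconstruction step is the subject of the talk and is not shown here; below is only the basic TSQR reduction in a serial NumPy sketch, with row blocks standing in for processors. The function name, block count, and test matrix are my own.

```python
import numpy as np

def tsqr(A, nblocks=4):
    """Tall-skinny QR sketch (serial illustration of the parallel algorithm).

    Each block of rows is factored independently; the stacked local R factors
    are factored again to give the R of the full matrix. In parallel, the local
    factorizations need no communication and the combination is a small tree.
    """
    blocks = np.array_split(A, nblocks, axis=0)
    Qs, Rs = zip(*(np.linalg.qr(blk) for blk in blocks))
    Q2, R = np.linalg.qr(np.vstack(Rs))          # combine the local R factors
    n = A.shape[1]
    # Propagate the second-stage Q back into the local Q blocks.
    Q = np.vstack([Qi @ Q2[i * n:(i + 1) * n] for i, Qi in enumerate(Qs)])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 30))
Q, R = tsqr(A)
print("factorization error:", np.linalg.norm(Q @ R - A))
print("orthogonality error:", np.linalg.norm(Q.T @ Q - np.eye(A.shape[1])))
```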
Bio: Grey Ballard is currently a Truman Fellow at Sandia National Labs in Livermore, CA. He received his PhD in 2013 from UC Berkeley, where he worked in the BeBOP group and Parallel Computing Laboratory under advisor James Demmel. His research interests include numerical linear algebra, high performance computing, and computational science, particularly in developing algorithmic ideas that translate to improved implementations and more efficient software. His work has been recognized with the SIAM Linear Algebra Prize and two conference best paper awards (at SPAA and IPDPS); he received the C.V. Ramamoorthy Distinguished Research Award at UC Berkeley, and his PhD thesis received an ACM Doctoral Dissertation Award Honorable Mention.
Sanjeeb Bose is a computational scientist at Cascade Technologies and a Consulting Assistant Professor in the Institute for Computational and Mathematical Engineering. His research expertise is in the areas of modeling and high-fidelity simulation of complex turbulent flows and large-scale parallel computing. Large-eddy simulations of turbulent flows to predict unsteady phenomena such as boundary layer separation and heat transfer for energy systems will be presented. The design of efficient and scalable algorithms (including scalability results up to 786,000 processors) and approaches to quantifying errors in numerical simulations of turbulent flows will also be briefly discussed.
Nick Henderson is a Research Associate in the Institute for Computational and Mathematical Engineering. He received his Ph.D. degree from Stanford University in 2012. Dr. Henderson was one of six awardees of "Most Excellent Young Researchers Presentation" at the Joint International Conference on Supercomputing in Nuclear Applications in 2013.
Therefore, we present a reduced non-intrusive spectral projection (NISP) method for uncertainty propagation which addresses the curse of dimensionality (COD) and facilitates reuse of deterministic solver modules. Our method is a modification of the standard NISP method with the intermediate construction of reduced-dimensional (and reduced-order) approximations of the input data entering each respective module. Assuming a generalized polynomial chaos (gPC) approximation of the raw input data, the construction methods are based on straightforward linear algebraic computations, the costs of which are negligible in comparison to repeated module calls. We implement the reduced NISP method on some benchmark problems and demonstrate its performance gains over the standard NISP method.
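The reduced NISP construction is the contribution of the talk; as background, here is a minimal sketch of standard (non-reduced) NISP in one random dimension, showing how gPC coefficients are obtained by quadrature from non-intrusive evaluations of a solver. The toy forward model, expansion order, and quadrature size are my own choices.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Stand-in for an expensive deterministic solver call, evaluated only at
# quadrature nodes (this is what makes the projection "non-intrusive").
f = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2

order = 6                                    # gPC expansion order
nodes, weights = hermegauss(order + 1)       # Gauss-Hermite (probabilists')
weights = weights / sqrt(2.0 * pi)           # normalize to the N(0,1) density

# NISP: project f onto Hermite polynomials He_k by quadrature,
# c_k = E[f(xi) He_k(xi)] / k! for a standard normal germ xi.
coeffs = []
for k in range(order + 1):
    He_k = hermeval(nodes, [0] * k + [1])
    coeffs.append(np.sum(weights * f(nodes) * He_k) / factorial(k))

mean = coeffs[0]
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("gPC mean:", mean, "variance:", var)

# Monte Carlo check of the same statistics.
xi = np.random.default_rng(0).standard_normal(200_000)
print("MC  mean:", f(xi).mean(), "variance:", f(xi).var())
```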