May 15, 2017 - 4:30pm to June 5, 2017 - 5:30pm

*Talks (CME 500)* features exceptional researchers from across the U.S. (and globally) working in new and exciting areas of computational science and engineering.

**SPRING 2017 SCHEDULE**

**Mondays, 4:30-5:20 PM at Building 300, Room 300 (unless otherwise noted)**

**April 17: Nihar Shah,**

*University of California, Berkeley*

Title: Learning from People

Abstract: Learning from people represents a new and expanding frontier for data science. Two critical challenges in this domain are developing algorithms for robust learning and designing incentive mechanisms for eliciting high-quality data. In this talk, I describe progress on these challenges in the context of two canonical settings, namely those of ranking and classification. In addressing the first challenge, I introduce a class of "permutation-based" models that are considerably richer than classical models, and present algorithms for estimation that are both rate-optimal and significantly more robust than prior state-of-the-art methods. I also discuss how these estimators automatically adapt and are simultaneously rate-optimal over the classical models, thereby enjoying a surprising win-win in the bias-variance tradeoff. As for the second challenge, I present a class of "multiplicative" incentive mechanisms, and show that they are the unique mechanisms that can guarantee honest responses. Extensive experiments on a popular crowdsourcing platform reveal that the theoretical guarantees of robustness and efficiency indeed translate to practice, yielding several-fold improvements over prior art.
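
As a point of reference for the ranking setting the abstract describes, the sketch below implements a simple win-rate (Borda-count style) estimator from noisy pairwise comparisons; this is a classical baseline, not the talk's permutation-based estimator, and the Bradley-Terry noise model used to simulate the data is an assumption for illustration.

```python
import numpy as np

def borda_rank(comparisons, n_items):
    """Rank items by empirical win rate in pairwise comparisons.

    comparisons: list of (winner, loser) index pairs.
    Returns item indices sorted from strongest to weakest.
    """
    wins = np.zeros(n_items)
    games = np.zeros(n_items)
    for w, l in comparisons:
        wins[w] += 1
        games[w] += 1
        games[l] += 1
    # Win rate; items never compared get rate 0.
    rate = np.divide(wins, games, out=np.zeros(n_items), where=games > 0)
    return np.argsort(-rate)

# Simulate noisy comparisons from a true ordering 0 > 1 > 2 > 3
# under a Bradley-Terry model (an illustrative assumption).
rng = np.random.default_rng(0)
true_strength = np.array([3.0, 2.0, 1.0, 0.0])
comparisons = []
for _ in range(2000):
    i, j = rng.choice(4, size=2, replace=False)
    p_i_wins = 1.0 / (1.0 + np.exp(true_strength[j] - true_strength[i]))
    if rng.random() < p_i_wins:
        comparisons.append((i, j))
    else:
        comparisons.append((j, i))

ranking = borda_rank(comparisons, 4)
```

With 2,000 comparisons the empirical win rates separate cleanly and the true ordering is recovered; the permutation-based models in the talk aim at far weaker assumptions than the parametric model simulated here.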

**April 24: Andrew Stuart,**

*California Institute of Technology*

NOTE: This seminar will be held in 420-041.

Title: Uncertainty Quantification in the Classification of High Dimensional Data

Abstract: We provide a unified framework for graph-based semi-supervised learning which brings together a variety of methods which have been introduced in different communities within the mathematical sciences; the unification is through an inverse problem formulation. We study the probit (from machine learning), Bayesian level-set (from inverse problems) and Ginzburg-Landau (from applied math) methods; we also show that the probit and level set approaches are natural relaxations of the harmonic function (kriging) approach introduced in machine learning. We introduce efficient numerical methods, suited to large data sets, for both MCMC-based sampling as well as gradient descent-based MAP estimation. We conclude by studying continuum limits of the problem formulations and algorithms that arise in the large-data limit.
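
The harmonic function (kriging) approach mentioned above has a compact linear-algebra form: labels on the unlabeled nodes solve a graph-Laplacian system. A minimal sketch on a 5-node path graph (the graph and labels are illustrative choices, not from the talk):

```python
import numpy as np

# Path graph 0-1-2-3-4. Nodes 0 and 4 carry labels +1 and -1;
# the harmonic extension fills in the rest.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian

labeled = [0, 4]
unlabeled = [1, 2, 3]
f_l = np.array([1.0, -1.0])             # known labels

# Harmonic solution: solve L_uu f_u = -L_ul f_l,
# i.e. each unlabeled node takes the average of its neighbors.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ f_l)
```

On a path graph the harmonic extension is linear interpolation between the labeled endpoints; the probit and level-set methods in the talk can be viewed as relaxations of this construction.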


**May 1: Rebecca Morrison,**

*Massachusetts Institute of Technology*

Title: Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting

Abstract: Data-driven models provide immense flexibility, the ability to leverage most or all the information content of the data, and can be computed and calibrated relatively quickly. However, the data at hand may not in fact conform to common assumptions, such as normality. In this talk, I present recent work that aims to push the frontier of data-driven modeling into more physically realistic scenarios. We develop an algorithm to identify sparse dependence structure in continuous and non-Gaussian high-dimensional distributions, given a corresponding set of data. The conditional independence structure of an arbitrary distribution can be represented as an undirected graph (or Markov network), but most algorithms for learning this structure are restricted to the discrete or Gaussian cases. Our new approach allows for more realistic and accurate descriptions of the distribution in question, and in turn better estimates of its sparse structure. Sparsity in the graph is of interest as it can accelerate inference, improve sampling methods, and reveal important dependencies between variables. The algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another. Moreover, finding such sparse structure is motivated by the need to build more efficient, coupled, multi-physics models.
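
The Gaussian special case that most existing structure-learning algorithms handle can be illustrated directly: for a multivariate Gaussian, zeros in the precision (inverse covariance) matrix encode exactly the conditional independencies of the Markov network. The sketch below checks this on a simulated three-variable chain (the chain model and coefficients are illustrative assumptions); the talk's contribution is extending such structure recovery beyond the Gaussian setting.

```python
import numpy as np

# Chain X0 -> X1 -> X2: X0 and X2 are marginally correlated but
# conditionally independent given X1, so the (0, 2) entry of the
# precision matrix should vanish while (0, 1) does not.
rng = np.random.default_rng(1)
n = 200_000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + rng.standard_normal(n)
x2 = 0.8 * x1 + rng.standard_normal(n)
X = np.stack([x0, x1, x2], axis=1)

theta = np.linalg.inv(np.cov(X, rowvar=False))  # estimated precision
```

Reading the graph off `theta` (edge present iff the entry is nonzero) recovers the chain 0-1-2 with no 0-2 edge, which is the sparsity structure the talk's transport-map approach generalizes to non-Gaussian distributions.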

**May 8: Amir Gholami,**

*University of Texas at Austin*

Title: Fast algorithms for inverse problems with parabolic PDE constraints with application to biophysics-based image analysis

Abstract: I will present a parallel distributed-memory framework for coupling biophysical models with medical image analysis. The target application is an image-driven inverse brain tumor growth model and an image registration problem, the combination of which can eventually help in diagnosis and prognosis of brain tumors. Our algorithm integrates several components: a spectral discretization in space, analytic adjoints, a highly optimized distributed fast Fourier transform, and a novel parallel cubic interpolation algorithm for an unconditionally stable semi-Lagrangian time-stepping scheme.

I will present efficiency and scalability results for the computational kernels, the inverse tumor and image registration solvers on two x86 systems, Lonestar 5 at the Texas Advanced Computing Center and Hazel Hen at the Stuttgart High Performance Computing Center. I will showcase results which demonstrate that our solver can be used to solve registration problems of unprecedented scale, resulting in ∼ 200 billion unknowns—a problem size that is 64× larger than the state-of-the-art. For problem sizes of clinical interest, our solver is about 8× faster than the state-of-the-art.
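
One ingredient above, semi-Lagrangian time stepping, can be sketched in one dimension: each grid point is traced back along its characteristic and the field is interpolated at the departure point, which remains stable even when the CFL number exceeds one. This toy uses linear interpolation in place of the solver's parallel cubic kernel, and the grid, speed, and pulse are illustrative choices.

```python
import numpy as np

# 1D constant-speed advection u_t + c u_x = 0 on a periodic grid.
n, c, dt = 128, 1.0, 0.05
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n                          # note c*dt/dx = 6.4 > 1
u = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse

for _ in range(200):
    x_dep = (x - c * dt) % 1.0        # departure points of characteristics
    u = np.interp(x_dep, x, u, period=1.0)

# After 200 steps the pulse has traveled c*dt*200 = 10 full periods,
# returning to its starting position (slightly widened by interpolation).
```

A standard explicit scheme would blow up at this CFL number; the semi-Lagrangian step only loses a little amplitude to interpolation diffusion, which motivates the higher-order cubic interpolation used in the actual solver.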

Bio: Amir Gholami is a PhD candidate at the Institute for Computational Engineering and Sciences (ICES), working with Prof. George Biros at the University of Texas at Austin. He holds a B.Sc. degree in Aerospace Engineering and a Master of Science degree in Mechanical Engineering. He is the recipient of the ACM Student Research Competition's Gold Medal at Supercomputing 2015 (SC15) and was a best student paper finalist at SC14. His research interests include high performance computing and its application to large-scale inverse problems and distributed machine learning algorithms.

**May 15: Mengdi Wang,**

*Princeton University*

Title: Stochastic First-Order Methods in Data Analysis and Reinforcement Learning

Abstract: Stochastic first-order methods provide a basic algorithmic tool for optimization, online learning and data analysis. In this talk, we survey several innovative applications including risk-averse optimization, online principal component analysis, and reinforcement learning. We will show that the convergence rate analysis of the stochastic optimization algorithms provides sample complexity analysis for the corresponding online learning applications. In particular, we will show some recent developments on stochastic primal-dual methods that apply to both the Markov decision problem and its online version, reinforcement learning. We will show that both the running-time complexity for the offline problem and the sample complexity for the online problem can be analyzed under the same framework.
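
The workhorse behind the methods surveyed above is plain stochastic gradient descent: at each step, sample one data point, take a gradient step, and decay the step size. A minimal least-squares instance (the problem, step-size schedule, and iteration count are illustrative assumptions, not from the talk):

```python
import numpy as np

# SGD on the least-squares objective (1/n) sum_i (a_i^T w - b_i)^2 / 2.
rng = np.random.default_rng(2)
n, d = 5000, 3
A = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true + 0.01 * rng.standard_normal(n)

w = np.zeros(d)
for t in range(1, 50_001):
    i = rng.integers(n)                  # sample one data point
    grad = (A[i] @ w - b[i]) * A[i]      # stochastic gradient
    w -= grad / (0.01 * t + 100.0)       # decaying step size
```

The point the abstract makes is that the same convergence-rate analysis of this iteration doubles as a sample-complexity bound when each data point is seen once in an online stream.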

**May 22: Kevin Carlberg,**

*Sandia*

Title: Breaking computational barriers: Using data to enable extreme-scale simulations for many-query problems

Abstract: As physics-based simulation has played an increasingly important role in science and engineering, greater demands are being placed on model fidelity. This high fidelity necessitates fine spatiotemporal resolution, which can lead to extreme-scale models whose simulations consume months on thousands of computing cores. Further, most practical decision-making scenarios (e.g., uncertainty quantification, design optimization) are ‘many query’ in nature, as they require the (parameterized) model to be simulated thousands of times. This leads to a ‘computational barrier’: the high cost of extreme-scale simulations renders them impractical for many-query problems. In this talk, I will present several approaches that exploit simulation data to overcome this barrier. First, I will introduce nonlinear model reduction methods that employ spatial simulation data and subspace projection to reduce the dimensionality of nonlinear dynamical-system models while preserving critical dynamical-system properties such as discrete-time optimality, global conservation, and Lagrangian structure. Second, I will describe methods for data-driven error modeling, which apply regression methods from machine learning to construct an accurate, low-variance statistical model of the error incurred by model reduction. This quantifies the (epistemic) uncertainty introduced by reduced-order models and enables them to be rigorously integrated in uncertainty-quantification applications. Finally, I will present data-driven numerical solvers that use simulation data to improve the performance of linear/nonlinear solvers and time-integration methods. I will present two such approaches: an adaptive-discretization method that applies Krylov-subspace iteration or h-adaptivity to enrich an initial solution subspace extracted from spatial simulation data, and another that employs temporal simulation data to improve the convergence of parallel-in-time integration methods.
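
The projection idea underlying the first approach can be shown in miniature: collect snapshots of a full model, extract a low-dimensional basis via the SVD (POD), and evolve the projected system instead. The sketch below uses a small linear ODE whose dynamics live in a 5-dimensional subspace; the system, dimensions, and time stepper are illustrative assumptions, and the talk's methods handle nonlinear dynamics with structure preservation that this toy does not attempt.

```python
import numpy as np

# Full model: x' = A x with dynamics confined to an r-dim subspace.
rng = np.random.default_rng(3)
n, r = 100, 5
U = np.linalg.qr(rng.standard_normal((n, r)))[0]     # subspace basis
A = U @ (-np.diag(np.arange(1.0, r + 1))) @ U.T - 5.0 * (np.eye(n) - U @ U.T)

x0 = U @ rng.standard_normal(r)      # initial condition in the subspace
dt, steps = 0.01, 300
X, x = [], x0.copy()
for _ in range(steps):               # explicit Euler, full n-dim model
    x = x + dt * (A @ x)
    X.append(x)

# POD basis from snapshots, then Galerkin projection of the operator.
V = np.linalg.svd(np.array(X).T, full_matrices=False)[0][:, :r]
Ar = V.T @ A @ V                     # reduced (r x r) operator

xr = V.T @ x0
for _ in range(steps):               # same scheme, reduced model
    xr = xr + dt * (Ar @ xr)
err = np.linalg.norm(V @ xr - x) / np.linalg.norm(x)
```

Because the trajectory truly lies in an r-dimensional subspace here, the reduced model reproduces the full one essentially exactly while integrating a 5-dimensional system instead of a 100-dimensional one; for genuinely high-dimensional nonlinear models the projection is approximate, which is what motivates the error models described next in the abstract.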

**June 5: Julia Ling,**

*Citrine Informatics (formerly Sandia)*

Title: Machine Learning + Physics

Abstract: Machine learning comprises a set of data-driven algorithms that can operate on big, high-dimensional data sets. These algorithms have been applied with great success to a variety of applications, including e-commerce, finance, and image recognition. This talk will discuss how these algorithms can be applied to scientific applications, in which there are known constraints and invariance properties. This talk will focus on two applications: one in turbulence modeling and another in materials science. In turbulence modeling, the multi-scale nature of the physics necessitates the use of constitutive closure models in simulations. With the increasing availability of large, high-fidelity data sets, there is now the possibility of using machine learning to provide more accurate closure models. In materials science, the discovery of new materials to meet stringent requirements involves exploration of a high-dimensional composition space. Machine learning models with uncertainty estimates can be used to accelerate this exploration process. A key facet of both of these research projects has been the tight integration of scientific knowledge with data-driven methods.