About the Speaker: Gill Bejerano is an Associate Professor of Developmental Biology, of Computer Science and of Pediatrics (Genetics).
The Bejerano Lab studies genome function in humans and related species. Gill and his colleagues are broadly interested in mapping genome sequence variation to phenotypic differences and in extracting specific genetic insights from deep sequencing measurements, with a particular interest in gene cis-regulation. They use their joint affiliation to apply a combination of computational and experimental approaches: they collect large-scale experimental data, write computational analysis tools, run them at scale to discover the most exciting testable hypotheses, and then experimentally validate those hypotheses. The lab works in small teams, in house or with close collaborators, pairing experimentalists and computational tool users directly with computational tool builders.
About the Speaker: Fayadhoi Ibrahima is a PhD candidate in the Institute for Computational and Mathematical Engineering (ICME), working under the supervision of Professor Hamdi Tchelepi in the Energy Resources Engineering (ERE) department on uncertainty quantification for oil recovery in reservoir engineering. Before starting his PhD research, Fayadhoi built a strong foundation in numerical analysis and probability across three universities: the University Pierre and Marie Curie (UPMC) and the Ecole Centrale Paris (ECP), both in France, and Stanford University. Fayadhoi was born and raised in France, but his parents are from the Comoros Islands. He enjoys traveling, teaching, and running.
About the Speaker: Fred Luskin, Ph.D., is the Director of the Stanford University Forgiveness Projects, a Senior Consultant in Health Promotion at Stanford University, and a Professor at the Institute for Transpersonal Psychology, as well as an affiliate faculty member of the Greater Good Science Center. He is the author of Forgive for Good: A Proven Prescription for Health and Happiness (HarperSanFrancisco, 2001) and Stress Free for Good: Ten Proven Life Skills for Health and Happiness (HarperSanFrancisco, 2005), with Kenneth Pelletier, Ph.D.
About the Speaker: Doug L. James has been a Full Professor of Computer Science at Stanford University since June 2015, and was previously an Associate Professor of Computer Science at Cornell University from 2006 to 2015. He holds three degrees in applied mathematics, including a Ph.D. in 2001 from the University of British Columbia. In 2002 he joined the School of Computer Science at Carnegie Mellon University as an Assistant Professor, before joining Cornell in 2006. His research interests include computer graphics, computer sound, physically based animation, and reduced-order physics models. Doug is a recipient of a National Science Foundation CAREER award, and a fellow of both the Alfred P. Sloan Foundation and the Guggenheim Foundation. He recently received a Technical Achievement Award from The Academy of Motion Picture Arts and Sciences for "Wavelet Turbulence," and the Katayanagi Emerging Leadership Prize from Carnegie Mellon University and the Tokyo University of Technology. He was the Technical Papers Program Chair of ACM SIGGRAPH 2015, and is currently a consulting Senior Research Scientist at Pixar Animation Studios.
Join ICME faculty for a discussion on “All About Advisors”:
About the Speaker: Virginia Williams is an Assistant Professor of Computer Science at Stanford University. Her research applies combinatorial and graph-theoretic tools to various computational domains. Her recent work has focused on the following:
(i) designing algorithms for shortest paths, pattern detection and other computational problems in graphs and matrices
(ii) reducing fundamental computational problems to one another in a fine-grained way, sometimes showing equivalences
(iii) studying how much graph distance information can be compressed, and computational issues in social choice: when and how can one efficiently manipulate elections, tournaments and competitions, how to measure the quality of a voting rule, etc.
Title: Adaptive, Limited-Memory BFGS Algorithms for Unconstrained Optimization
Abstract: The limited-memory BFGS method (L-BFGS) has become the workhorse optimization strategy for many large-scale nonlinear optimization problems. A major difficulty with L-BFGS is that, although the memory size M can have a great effect on performance, it is difficult to know which size will work best. Importantly, a larger M does not necessarily improve performance, and may in fact degrade it. There is no guidance in the literature on how to choose M. In this talk, we briefly review L-BFGS and then suggest two computationally efficient ways to measure the effectiveness of various memory sizes, allowing us to adaptively choose a different M at each iteration. The numerical success of these two adaptive strategies suggested ways to extend them, which we briefly consider. Our numerical results illustrate that our approach improves the performance of the L-BFGS method, and indicate some further directions for research.
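The memory size M in the abstract is the number of curvature pairs retained by the standard two-loop recursion. As a minimal illustration of where M enters (a pure-NumPy sketch with a fixed memory size `m`, not the speaker's adaptive method), consider:

```python
import numpy as np

def lbfgs(f_grad, x0, m=5, max_iter=100, tol=1e-8):
    """Minimal L-BFGS with the two-loop recursion and Armijo backtracking.
    `m` is the memory size M discussed in the abstract."""
    x = x0.copy()
    g = f_grad(x)[1]
    S, Y = [], []  # stored displacement / gradient-difference pairs
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Two-loop recursion: apply the implicit inverse Hessian to g
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(S), reversed(Y)):  # newest pair first
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        if S:  # initial scaling gamma_k = s'y / y'y
            q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # oldest first
            rho = 1.0 / (y @ s)
            q += (a - rho * (y @ q)) * s
        d = -q
        # Armijo backtracking line search
        t, fx = 1.0, f_grad(x)[0]
        while f_grad(x + t * d)[0] > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = f_grad(x_new)[1]
        S.append(x_new - x); Y.append(g_new - g)
        if len(S) > m:  # keep only the m most recent pairs
            S.pop(0); Y.pop(0)
        x, g = x_new, g_new
    return x

# Convex quadratic test: f(x) = 0.5 x'Ax - b'x, minimizer solves Ax = b
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q @ Q.T + 20 * np.eye(20)
b = rng.standard_normal(20)
fg = lambda x: (0.5 * x @ A @ x - b @ x, A @ x - b)
x_star = lbfgs(fg, np.zeros(20), m=5)
```

Varying `m` here makes the abstract's point concrete: the only change is how many (s, y) pairs the recursion keeps.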
Title: Polynomial and Rational Approximation Techniques for Nonintrusive Uncertainty Quantification
Abstract: With the growth of computing power in recent decades, uncertainty quantification (UQ) for numerical simulations of engineering systems has attracted significant attention. Most nonintrusive UQ methods are concerned with running simulations with several values of the input uncertain/design parameters, and using the output to construct an accurate and efficient surrogate that describes the behavior of the quantity of interest as a function of the sources of uncertainty. The fundamental underlying problem is that of approximating a function from its values at a discrete set of points in its domain.
In the first portion of the talk, we will discuss a nonintrusive polynomial chaos expansion (PCE) technique, in which we use weighted least squares to construct a multivariate polynomial surrogate. The quality of the approximation depends crucially on the location of the points at which the function is evaluated. We present a novel optimization-based method for finding the best points for this type of approximation. In the second portion, we discuss the setting where we are not free to choose the points at which the function is evaluated. For this scenario, we introduce an efficient rational interpolation scheme based on Floater-Hormann rational interpolation. We present theoretical guarantees regarding the accuracy and stability of this scheme, and verify its efficiency compared to similar methods in the literature.
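As a toy illustration of the weighted-least-squares surrogate construction described above (a one-dimensional sketch with assumed sample points and uniform weights, not the talk's optimization-based point selection):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical 1-D quantity of interest on [-1, 1]
f = lambda x: np.exp(x) * np.sin(3 * x)

deg = 8                                          # polynomial surrogate degree
x = np.cos(np.pi * (np.arange(40) + 0.5) / 40)   # assumed Chebyshev-style samples
w = np.ones_like(x)                              # uniform weights for this sketch

V = L.legvander(x, deg)                          # Legendre Vandermonde matrix
sw = np.sqrt(w)
# Weighted least squares: minimize || sqrt(W) (V c - f) ||_2
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
surrogate = lambda t: L.legval(t, coef)          # cheap polynomial surrogate
```

The surrogate can then be evaluated at many parameter values far more cheaply than the original simulation it stands in for.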
Location: 520-131
Time: 2 p.m.
About the Speaker: Jack Poulson is an Assistant Professor of Mathematics and a member of the Institute for Computational and Mathematical Engineering at Stanford University. He completed his Ph.D. in Computational and Applied Mathematics at The University of Texas at Austin at the end of 2012. His current research is focused on developing efficient distributed-memory algorithms for conic Interior Point Methods (especially for Second-Order Cone Programs) and lattice reduction techniques (such as BKZ 2.0). This research is performed publicly within the library Elemental (https://github.com/elemental/Elemental).
About the Speaker: Anil Damle is a PhD candidate in the Institute for Computational and Mathematical Engineering (ICME) at Stanford University. His general interests include numerical linear algebra, nonlinear approximations, matrix analysis, and fast algorithms for structured matrices. Anil's current research projects focus on localization of Kohn-Sham orbitals, updating of certain tree-based matrix factorizations, and nonnegative matrix factorizations. Visit Anil's personal webpage at: http://web.stanford.edu/~damle/
About the Speaker: Jure Leskovec is an Assistant Professor of Computer Science at Stanford University, where he is a member of the InfoLab and Artificial Intelligence Lab. Jure also works as a Chief Scientist at Pinterest, where he focuses on machine learning problems. He cofounded a machine learning startup called Kosei, which was acquired by Pinterest.
About the Speaker: Victor Minden is a PhD student at the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, where he works with Lexing Ying. His research concerns fast algorithms for scientific computing, in particular fast linear algebra on rank-structured matrices.
About the Speaker: Percy Liang is an Assistant Professor of Computer Science and, by courtesy, of Statistics at Stanford University. He is affiliated with the Stanford Artificial Intelligence Lab and the Stanford Natural Language Processing Group. His research has two aims: (i) creating software that allows humans to communicate with computers and (ii) developing algorithms that can infer latent structures from raw data. Percy broadly identifies with the machine learning (ICML, NIPS) and natural language processing (ACL, NAACL, EMNLP) communities. He is also a strong proponent of efficient and reproducible research: in collaboration with Microsoft Research, he develops CodaLab Worksheets, a new platform that allows researchers to maintain the full provenance of an experiment, from raw data to final results.
About the Speaker: Dr. Laura Grigori received a Ph.D. in Computer Science (December 2001) from Université Henri Poincaré, France, INRIA Lorraine. After spending two years at UC Berkeley and LBNL as a postdoctoral researcher, she joined INRIA in January 2004. Laura was a member of the Sage group at INRIA Rennes and the Grand-Large group at INRIA Saclay - Île-de-France and LRI, Paris 11 University. Since January 2013, she has been leading Alpines, a joint group between INRIA Paris - Rocquencourt and the J.L. Lions Laboratory, UPMC. Laura's research interests include numerical linear algebra, high performance computing for scientific applications, sparse matrix computations, combinatorial scientific computing, and mathematical software.
Join Nick Henderson, Research Associate at ICME, in this informal discussion of research, teaching, and fun times in ICME!
About the Speaker: Nick Henderson is a Research Associate and Instructor at ICME. He is also affiliated with the CUDA Center of Excellence, where he collaborates with the Stanford Linear Accelerator Center (SLAC) and the High Energy Accelerator Research Organization (KEK) in developing GPU-based algorithms and codes to accelerate the simulation of particles travelling through and interacting with material. More of Nick's current work can be found on his webpage: http://stanford.edu/~nwh/
About the Speaker: Sergio is a PhD student in Computational Mathematics at Stanford (ICME). His interests cover the broad fields of convex optimization and statistics, particularly machine learning and approximation algorithms. Sergio holds a Master's degree in Mathematics from Universidad de los Andes in Colombia, where he worked under the advice of Professor Mauricio Velasco on copositive optimization and semidefinite relaxations for finding the independence number of graphs. Sergio also holds a Bachelor's degree in Mathematics and Economics; his thesis, advised by Prof. Adolfo Quiroz, focused on optimal bandwidths for kernel classification.
About the Speaker: Miles is a fourth-year Ph.D. candidate in Operations Research at MIT. He received his B.S. in Applied Mathematics and M.S. in Statistics from the University of Chicago in 2011. After graduating, he spent a year as a researcher at Argonne National Laboratory before starting at MIT. His research interests span diverse areas of mathematical optimization, with a unifying theme of developing new methodologies for large-scale optimization drawing from motivating applications in renewable energy. Miles has published work in chance constrained optimization, mixed-integer conic optimization, robust optimization, stochastic programming, algebraic modeling, automatic differentiation, numerical linear algebra, and parallel computing techniques for large-scale problems.
Title: Spectrum-Revealing Matrix Factorizations
Abstract: Low-rank matrix approximations have become of central importance in the era of big data. Efficient and effective methods for such approximations have been proposed in statistics, theoretical computer science, and optimization. In this talk, we establish spectrum-revealing matrix factorizations, a new framework for efficient and effective matrix approximation. These factorizations are variants of the classical LU, QR, and Cholesky factorizations with row and/or column permutations, yet are competitive with the best matrix approximations in both theory and computational effectiveness. We also discuss extensions of these factorizations for efficient computation of the truncated SVD and solutions of nuclear norm minimization problems. We demonstrate the effectiveness of our approaches with numerical experiments on both synthetic and real data.
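The spectrum-revealing pivoted factorizations of the talk are not available in standard NumPy, so the sketch below illustrates the same goal, an inexpensive low-rank approximation competitive with the truncated SVD, using a randomized range finder as an assumed stand-in technique (not the speaker's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 200 x 100 matrix with rapidly decaying singular values 2^0, 2^-1, ...
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((100, 100)))
s = 2.0 ** -np.arange(100)
A = (U[:, :100] * s) @ V.T

k, p = 10, 5                               # target rank and oversampling
Y = A @ rng.standard_normal((100, k + p))  # sample the range of A
Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the sample
A_k = Q @ (Q.T @ A)                        # rank-(k+p) approximation of A

# Spectral-norm error should be close to the optimal value s[k+p]
err = np.linalg.norm(A - A_k, 2)
```

When the spectrum decays quickly, as here, the cheap approximation is nearly as accurate as the best rank-(k+p) approximation given by the truncated SVD.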
About the Speaker: Sanjeeb Bose holds a BS degree in Mechanical Engineering from the California Institute of Technology and a PhD in Mechanical Engineering from Stanford University. He was a recipient of the Department of Energy's Computational Science Graduate Fellowship as a graduate student. He specializes in subgrid-scale and wall modeling for LES and algorithms for large-scale parallel computing. Sanjeeb also contributes to the development of the core infrastructure and numerical methods in the Cascade flow solvers at Cascade Technologies.
Title: Scattered Data Interpolation via Weighted L1 Minimization
About the Speaker: Saman Ghili is a graduate student at Stanford University. He works on low-rank separated representations, a promising approach to handle computational models affected by a large number of uncertainties. He is focused on the study of the uncertainty introduced in reactive flows by imprecise specification of the reaction rates.
Title: Pricing Short-Term Market Risk: Evidence from Weekly Options
Abstract: We study short-term market risks implied by weekly S&P 500 index options. The introduction of weekly options has dramatically shifted the maturity profile of traded options over the last five years, with a substantial proportion now having expiry within one week. Economically, this reflects a desire among investors for actively managing their exposure to very short-term risks. Such short-dated options provide an easy and direct way to study market volatility and jump risks. Unlike longer-dated options, they are largely insensitive to the risk of intertemporal shifts in the economic environment, i.e., changes in the investment opportunity set. Adopting a novel general semi-nonparametric approach, we uncover variation in the shape of the negative market jump tail risk which is not spanned by market volatility. Incidents of such tail shape shifts coincide with serious mispricing of standard parametric models for longer-dated options. As such, our approach allows for easy identification of periods of heightened concerns about negative tail events on the market that are not always "signaled" by the level of market volatility and elude standard asset pricing models.
About the Speaker: Andrew W. Lo is the Charles E. and Susan T. Harris Professor at the MIT Sloan School of Management and director of the MIT Laboratory for Financial Engineering. He received his Ph.D. in economics from Harvard University in 1984. Before joining MIT’s finance faculty in 1988, he taught at the University of Pennsylvania’s Wharton School as the W.P. Carey Assistant Professor of Finance from 1984 to 1987, and as the W.P. Carey Associate Professor of Finance from 1987 to 1988.
Title: Conditionals
Abstract: Conditionals (roughly, "if... then..." statements) are useful in ordinary inferences about the world, in decision making and decision theory, in reasoning on the basis of theories, and in philosophy. I canvass a variety of accounts of conditionals, including selection semantics, a material conditional account, an account on which conditionals lack truth values... and argue that all of them can be understood as instances of the same general template, with different ways of filling in the details.
ICME will host four tech workshops on Friday, November 13th as part of the annual ICME Xtend event. Whether or not you are participating in the student recruiting at ICME Xtend, you are welcome to join us for the following tech talks.
As room is limited, please RSVP by this Wednesday, November 11th to ensure that we have space available:
9:30-10:30 a.m.
Choose between the following (RSVP required):
10:30-11:30 a.m.
Choose between the following (RSVP required):
Title: Nonlinear Model Reduction: Discrete Optimality and Time Parallelism
ICME Xtend returns for a second year in 2015.
Our main event will take place on Thursday and Friday, November 12-13, 2015. This one-of-a-kind event brings ICME students and faculty together with our partners from industry and national labs for two days of networking and recruiting, discussions on current trends in our fields, workshops, and mixers.
For details on special pre-Xtend events, please see the 'more information' section below.
Information for ICME students: https://icme.stanford.edu/Xtend2015_students
Information for ICME partners: https://icme.stanford.edu/Xtend2015_partners
Information on joining the ICME partnership programs: https://icme.stanford.edu/industrialaffiliates
9:00-11:00 a.m.
Networking Breakfast
Huang basement, Student Commons
11:00 a.m.-5:00 p.m.
Interviews
See your schedule for specific appointments and locations
5:00-7:30 p.m.
Reception
ICME lobby, Huang basement, Suite 060
8:30-9:30 a.m.
Breakfast
ICME Lobby, Huang Basement, Suite 060
9:30-10:30 a.m.
Choose between the following (RSVP required):
Interactive Data Visualization: The Face of Big Data
HIVE Visualization Center, Huang B050
Law, Order, and Algorithms
Huang Building, Room 305
10:30-11:30 a.m.
Choose between the following (RSVP required):
The Center for Financial and Risk Analytics (CFRA)
HIVE Visualization Center, Huang B050
Experiments on Networks
Huang Building, Room 305
About the Speaker: Michael Saunders is a Research Professor in the Systems Optimization Laboratory at Stanford University. He grew up in Christchurch, New Zealand, where he received a BSc (Honors) degree in Mathematics from the University of Canterbury, and was a Scientific Officer at the DSIR in Wellington, New Zealand, from 1966 to 1978. He received his MS in 1970 and his PhD in 1972, both in Computer Science at Stanford University (advisor Gene Golub). He spent two years in the Stanford Operations Research Department in 1975-76, rejoined the department as a senior research associate in 1979, and was appointed Professor (Research) in 1987. He is known for contributions to mathematical software, and teaches Large-Scale Numerical Optimization (MS&E 318 / CME 338).
Title: Phase Retrieval, Self-Calibration, Random Matrices, and Convex Optimization
Abstract: I will demonstrate how two important but seemingly unrelated problems, namely phase retrieval and self-calibration, can be solved using methods from random matrix theory and convex optimization.
Phase retrieval is the century-old problem of reconstructing a function, such as a signal or image, from intensity measurements, typically from the modulus of a diffracted wave. Phase retrieval problems, which arise in numerous areas including X-ray crystallography, astronomy, diffraction imaging, and quantum physics, are notoriously difficult to solve numerically. They also pervade many areas of mathematics, such as numerical analysis, harmonic analysis, algebraic geometry, combinatorics, and differential geometry.
Self-calibration is an increasingly important concept, since the need for precise calibration of sensing devices is a major roadblock in many scientific and technological endeavors. The idea of self-calibration is to equip a hardware device with a smart algorithm that automatically compensates for the lack of calibration. I will demonstrate how both phase retrieval and self-calibration can be treated efficiently via convex programming by "lifting" the associated nonlinear inverse problem to an underdetermined linear problem. Using tools from random matrix theory and compressive sensing, we will see that for certain types of random measurements both problems can be solved exactly via a convenient semidefinite program. Applications in X-ray crystallography, array calibration, and the Internet of Things will be discussed.
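The "lifting" step can be checked numerically: an intensity measurement |<a_i, x>|^2, quadratic in the signal x, becomes linear in the rank-one matrix X = x x^T. A small real-valued sketch of that identity (actual recovery would require a semidefinite-programming solver, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 30
x = rng.standard_normal(n)         # unknown signal (real case for simplicity)
A = rng.standard_normal((m, n))    # rows are the random measurement vectors a_i
b = (A @ x) ** 2                   # phaseless measurements |<a_i, x>|^2

# Lifting: b_i = <a_i a_i^T, X> with X = x x^T, which is LINEAR in X
X = np.outer(x, x)
lifted = np.einsum('mi,ij,mj->m', A, X, A)  # a_i^T X a_i for each i
assert np.allclose(lifted, b)
```

A semidefinite program would then search over positive semidefinite matrices X consistent with the linear measurements, recovering x (up to sign) from the leading eigenvector of the solution.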
Title: Assimilation and Propagation of Clinical Data Uncertainty In Cardiovascular Modeling
The inaugural Women in Data Science Conference will be held at Stanford University on Monday, November 2, 2015.
Click here to view the official event site: http://widsconference.org/
Thanks to an amazing response, all tickets for this conference are sold out. Check back to this website soon for information on how you can be our 'virtual guests' for the conference via webstream. If you would like to receive waitlist notifications, conference updates, and live streaming notifications, sign up for our mailing list.
Women in Data Science Conference
Our aim is to inspire, educate, and support women in the field, from those just starting their journey to those who are established leaders in industry, academia, government, and NGOs.
Join us for this one-day technical conference.
View the conference presentations here: https://icme.stanford.edu/WIDSpresentations
Title: Data and Physics Driven Physical Modeling
Abstract: As computing capabilities grow and the amount of experimental and numerical data increases, we can design computational strategies to automatically test and assess different modeling assumptions. We introduce a general data-driven statistical framework that bridges the gap between (numerical or laboratory) experimentation, physical modeling, and uncertainty quantification. The framework enables the study of uncertainties and biases in physical models estimated from data. We differentiate between two types of modeling uncertainties and biases, the first due to physical errors in the models and the second due to noise introduced by the data-acquisition process. We also present different procedures to build models under different noise assumptions and propose a metric to quantify the quality of the data-driven estimations. The framework is tested in the context of combustion science and chemical kinetics and driven by empirical data and simple reactive flow models.
Title: Efficient Parameter Estimation for Multivariate Jump-Diffusions
Abstract: This paper develops an unbiased and computationally efficient Monte Carlo estimator of the transition density of a multivariate jump-diffusion process. The drift, volatility, jump intensity, and jump magnitude are allowed to be state-dependent and non-affine. Most importantly, it is not necessary that the variance-covariance matrix can be diagonalized using a change of variable or change of time. Our density estimator facilitates the parametric estimation of multivariate jump-diffusion models based on low-frequency data. The parameter estimators we propose have the same asymptotic behavior as maximum likelihood estimators under mild conditions that can be verified using our density estimator. Numerical case studies illustrate our results. Joint work with François Guay.
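To make the model class concrete, here is a hedged Euler-scheme sketch of a scalar jump-diffusion with state-dependent coefficients. All coefficient functions below are illustrative choices, and this simulates a sample path rather than implementing the paper's density estimator:

```python
import numpy as np

def simulate_jump_diffusion(x0, mu, sigma, lam, jump, T=1.0, n=1000, seed=0):
    """Euler scheme for dX = mu(X) dt + sigma(X) dW + jumps at intensity lam(X).
    Illustrates the state-dependent model class; NOT the paper's estimator."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()       # Brownian increment
        n_jumps = rng.poisson(lam(x[k]) * dt)          # jumps in this step
        jump_part = sum(jump(x[k], rng) for _ in range(n_jumps))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW + jump_part
    return x

# Hypothetical mean-reverting example with state-dependent volatility
path = simulate_jump_diffusion(
    x0=1.0,
    mu=lambda x: 2.0 * (1.0 - x),           # mean reversion toward 1
    sigma=lambda x: 0.2 * np.sqrt(abs(x)),  # state-dependent volatility
    lam=lambda x: 0.5,                      # jump intensity (constant here)
    jump=lambda x, rng: 0.1 * rng.standard_normal(),
)
```

The transition density of such a process has no closed form in general, which is why unbiased Monte Carlo estimators of it are valuable for likelihood-based estimation.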
Speaker: David Bindel, Assistant Professor in the Department of Computer Science at Cornell University
Title: Model Reduction for Edge-Weighted Personalized PageRank
Abstract: We describe work on model reduction for fast computation of PageRank on graphs in which the edge weights depend on parameters. For an example learning-to-rank application, our approach is nearly five orders of magnitude faster than the standard approach. This speed improvement enables interactive computation of a class of ranking results that previously could only be computed offline. While our approach draws on ideas common in model reduction for large physical simulations, the cost and accuracy tradeoffs for the edge-weighted PageRank problem are different, as we will describe.
This is joint work with Wenlei Xie, Johannes Gehrke, and Al Demers.
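As a minimal sketch of the underlying computation (plain power iteration on a toy parameterized graph, which is exactly the expensive baseline the talk's reduced-order models accelerate, not the speakers' method):

```python
import numpy as np

def personalized_pagerank(W, v, alpha=0.85, n_iter=200):
    """Personalized PageRank by power iteration on a weighted graph.
    W[i, j] is the weight of edge j -> i; v is the teleportation vector."""
    P = W / W.sum(axis=0, keepdims=True)  # column-stochastic transition matrix
    p = v.copy()
    for _ in range(n_iter):
        p = alpha * (P @ p) + (1 - alpha) * v
    return p

# Toy 4-node graph with one parameter-dependent edge weight (hypothetical)
def weights(theta):
    W = np.array([[0., 1., 1., 0.],
                  [1., 0., 0., 1.],
                  [1., 1., 0., 0.],
                  [0., 0., 1., 1.]])
    W[0, 1] = np.exp(theta)  # the edge weight that depends on the parameter
    return W

v = np.array([1., 0., 0., 0.])  # personalize: teleport to node 0
p = personalized_pagerank(weights(0.3), v)
```

Recomputing `p` from scratch for every new `theta` is what becomes prohibitive at scale; the talk's model reduction replaces this full iteration with a much smaller parameterized surrogate.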
Visit David Bindel's website at: http://www.cs.cornell.edu/~bindel/