Impact: Connecting Computational Math Research with Practice and, in turn, Practice with its Consequences (CME 500)

Mondays 4:30-5:20 PM - April 3 to June 5, 2023

Event Details:

Monday, April 3, 2023 - Monday, June 5, 2023

Location

Stanford Campus - Y2E2 111
United States

The learning goal of this year's spring seminar, aptly named "Impact," is to connect computational math research with practice and, in turn, practice with its consequences. In other words, while speakers cover technical topics in depth, we ask that they also motivate, to the extent possible, the "so what" of their work. It will feature speakers from Stanford, ICME affiliate companies, and ICME alumni. The CME 500 Spring 2023 seminar series is open to all graduate students at Stanford.

All sessions listed below will be in-person only.

Not a registered student, but interested in attending? Contact us at icme-contact@stanford.edu to receive information on the seminar.

Schedule

Monday, April 3, 2023

    Introduction

    ICME Executive Director, Dr. Sanjay Saigal, will introduce the spring seminar series and discuss the learning journey that will take place over 9 weeks.

    Sanjay Saigal

    Executive Director of ICME

Monday, April 10, 2023

    Scientific Machine Learning in Industrial Settings and a Practical Example of the Use of Graph Neural Networks in Mechanics

    Physics-informed machine learning, a subset of scientific machine learning (SciML), has shown great promise in accelerating classical physics solvers and discovering new governing laws for complex physical systems. While SciML activity in foundational research is growing exponentially, it lags in real-world utility, let alone reliable, scalable integration into industrial pipelines. To close this gap, current SciML algorithms need to advance in maturity and validation, specifically towards operating in large-scale, three-dimensional, continuously evolving environments marked by noise, sparsity, stochasticity, and the other complexities of real-world problems. Seminar speaker Marta D'Elia will highlight some of the current challenges in applying SciML in an industrial context and, through a practical example, discuss the needs and bottlenecks and propose possible strategies.
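
    To make the technique concrete, here is a minimal, self-contained sketch of the core idea behind physics-informed learning: a network is trained so that, in addition to any data-fit terms, it satisfies a governing equation at sampled collocation points. The toy equation, network size, and training settings below are illustrative assumptions, not details from the talk.

    # Minimal physics-informed training loop (illustrative sketch, not the speaker's code).
    # The network u(x) is trained to satisfy u'' + pi^2 * sin(pi*x) = 0 on (0, 1)
    # with u(0) = u(1) = 0; the exact solution is sin(pi*x).
    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(2000):
        # Collocation points where the differential-equation residual is penalized.
        x = torch.rand(128, 1, requires_grad=True)
        u = model(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = d2u + (torch.pi ** 2) * torch.sin(torch.pi * x)

        # Boundary conditions enter as a soft penalty alongside the residual loss.
        xb = torch.tensor([[0.0], [1.0]])
        loss = (residual ** 2).mean() + (model(xb) ** 2).mean()

        opt.zero_grad()
        loss.backward()
        opt.step()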

    Marta D'Elia holds a PhD in Applied Mathematics from Emory University and a Master's in Mathematical Engineering from Politecnico di Milano. She is a Principal Scientist at Pasteur Labs and an Adjunct Professor at Stanford ICME. Prior to this, she was a Research Scientist and Tech Lead at Meta and a Principal Member of the Technical Staff at Sandia Labs, where she worked for eight years. Her expertise spans the fields of Scientific Computing, Scientific Machine Learning, Optimization, and Design. She's an associate editor of SISC, Nature Scientific Reports, and other computational math journals. She's also the co-founder of the One Nonlocal World Project.

    Marta D'Elia

    Principal Scientist at Pasteur Labs and Adjunct Professor at Stanford ICME

Monday, April 17, 2023

    A Career in High-Performance Computing

    One's work and one's career can succeed in being entertaining and fulfilling. Or in filling one's bank account. Or in "having an impact". One hopes the impact is to enable others to succeed by providing knowledge, insight, and tools to be used. I didn't think much about impact through my career, only about the problems I could solve. I looked for problems to solve and things to build wherever I could find an open issue that others were interested in, where progress was slow, that hadn't been solved before, and for which I believed I had some lever with which to open up the problem and find a solution. One of my mentors, Herb Keller, gave me great advice: Work on hard problems. Sometimes this worked and sometimes it didn't. Some of the technical successes had little impact and some are used every day. Some were done alone, some with one or two colleagues, and some with big teams.

    I'll talk through some of these efforts, why I found them interesting, how they were tackled, and what impact they've had. I'll talk about math and software; about CFD and supercomputers; about Matlab and adding sparse matrices to it; and about how MPI bridged the gap between computational math and computer systems, networking in particular. I'll discuss other impact-enabling assets one can acquire by practice and study, specifically the communication skills that can enhance the value of one's contributions.

    Rob Schreiber is a technical fellow at Cerebras Systems, Inc., where he works on architecture and programming of systems for AI and for computational science. Before Cerebras he taught at Stanford and RPI and worked at NASA, at startups, and at HP. Schreiber's research spans sequential and parallel algorithms for matrix computation, compiler optimization for parallel languages, and high-performance computer design. With Moler and Gilbert, he developed the sparse matrix extension of Matlab. He created the NAS CG parallel benchmark. He was a designer of the High Performance Fortran language. Rob led the development at HP of a system for synthesis of custom hardware accelerators. He has helped pioneer the exploitation of photonic signaling in processors and networks. He is an ACM Fellow, a SIAM Fellow, and was awarded, in 2012, the Career Prize from the SIAM Activity Group on Supercomputing.

    Rob Schreiber

    Distinguished Engineer at Cerebras Systems Inc.

Monday, April 24, 2023

    The Age of Phenomics

    Phenomics applies artificial intelligence and information retrieval to analyze biological systems from first principles. The technique observes the source code of life (genomic sequences), connected to a sparse time series of epigenetic and environmental data, to detect health transition patterns. This approach was first tested in a NASA twin-astronaut study and has been honed for liquid biopsies in recent years. While the cost ranged from tens of millions of dollars with NASA to thousands for digital biopsies, the price point is dropping precipitously thanks to advances in AI, genomic sequencing, and cloud computing. Soon we'll be able to sequence a patient for $50, use AI and IR to detect early transitions, and then recommend simple actions to maintain optimal health. This talk explores these first principles, provides a quick survey of the state of the art, and posits future developments.

    Scott Penberthy focuses on the transformation of healthcare to a consumer-first, data-driven experience powered by AI, next-generation sequencing, and cloud computing. Scott believes cancer will soon be a manageable condition, powered by our ability to understand, debug, and repair the source code of life. Therapeutics also become code, customized to the individual, powered by nanoscale physics that are analyzed by hyperscale AI and IR. This is the Age of Phenomics.

    He is a member of Google Cloud’s CTO office (OCTO), a team of industry ex-CTOs who co-innovate with top customers and product teams. Scott reframes scientific processes as “tensors in, tensors out,” extracts data before and after a complex system, then builds and trains ML models to replicate, improve and optimize our best human efforts. He currently leads a broad team across Google Search, Research, and Google Cloud focused on precision healthcare experiences.

    Scott has demonstrated efficacy in applying AI and information retrieval to precision medicine, reverse engineering mainframe systems in healthcare claims processing, searching for exoplanets and minerals on the moon with NASA, and cutting costs across millions of customer chat and voice interactions. Scott was blessed with the opportunity to build an amazing AI team within the CTO Office, where the team instigated call center AI, document AI, AI notebooks, and many of Google's largest AI-first cloud deals. Scott holds a PhD in AI with multiple degrees from MIT and the University of Washington. Previously, Scott landed public cloud at PwC for 200k employees in 2014, moved a video site for 5M users to AWS in 2008, sold a social photo site with 50M users in 2007, built mobile phone "widgets" in 2005, and launched a $13B web middleware and $4B web hosting business in the 90s. Scott enjoyed working directly with the CEO of IBM, Lou Gerstner, during Lou's turnaround of Big Blue. Years later he had a similar role with Bob Moritz, CEO and Chairman of PwC. Scott is an avid programmer, triathlete, space fan, guitarist, chef, and father of two amazing daughters. Occasionally, Scott speaks in public.

    Scott Penberthy

    Director, Applied AI at Google

Monday, May 1, 2023

    Training Efficient Networks Using "Over-Parameterized Optimization" or "Knowledge Distillation"

    Efficient machine learning models play a critical role in enabling on-device experiences. However, there is a trade-off between computational efficiency (latency, memory footprint, and power consumption) and model accuracy: increased compression leads to lower accuracy. In this presentation, we will review recent approaches to optimize the efficiency-accuracy trade-off. Specifically, we will explore how we can harness the power of over-parameterization to facilitate learning and create more accurate and compact models. Join us to discover the latest strategies that can enhance on-device experiences with efficient machine learning models.
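
    As a concrete reference point for one of the techniques named in the title, here is a minimal knowledge-distillation sketch: a compact student network is trained to match the softened outputs of a larger teacher in addition to the usual hard-label loss. The model sizes, temperature, loss weights, and random data below are placeholder assumptions, not details from the talk.

    # Minimal knowledge-distillation training loop (illustrative sketch only).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    teacher = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
    student = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    T, alpha = 4.0, 0.5  # softening temperature and loss-mixing weight (placeholders)

    for step in range(1000):
        x = torch.randn(128, 64)          # stand-in for a real minibatch of features
        y = torch.randint(0, 10, (128,))  # stand-in for hard labels
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)

        # Soft-target loss: KL divergence between softened teacher and student outputs.
        soft = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                        F.softmax(t_logits / T, dim=-1),
                        reduction="batchmean") * (T * T)
        hard = F.cross_entropy(s_logits, y)
        loss = alpha * soft + (1 - alpha) * hard

        opt.zero_grad()
        loss.backward()
        opt.step()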

    Hadi Pouransari is a Machine Learning Researcher and tech lead on Apple's MIND team. His interests include distributed large-scale training, optimization, and efficient models. Previously, he was a senior ML engineer in Apple's Special Projects Group. He earned his PhD in 2017 from ICME under the supervision of Professors Darve and Mani, with a PhD minor in Computer Science. His thesis on fast numerical linear algebra with applications to scientific computing was awarded the Juan Simo thesis award. Hadi completed dual degrees in Computer Science and Mechanical Engineering at Sharif University in 2011.

    Hadi Pouransari

    AI/ML Researcher at Apple

Monday, May 8, 2023

    From ICME to Applied Optimization in Industry

    As an analytics consultant at Accenture, Nicole has had the opportunity to tackle fascinating and impactful real-world problems that leverage the diverse and valuable skills taught at ICME. In this talk, Nicole will take you on a journey through some of her recent consulting projects. You'll get an inside look at how she increased efficiency and reduced costs at a major US bakery using network design optimization, how she used mixed-integer scheduling techniques to help a startup glass manufacturing company streamline their production of bottles, and how she worked with a leading tech company to optimally allocate spare parts across global data centers using automated processes. Those three projects alone have been estimated to have a total impact of over $10 million. Whether you're a graduate student, faculty member, or simply interested in the application of computational mathematics, you'll leave this talk with a deeper understanding of how computational and mathematical engineering techniques are making a real impact in the world.

    Nicole Rahmani graduated with her PhD from ICME in 2012 with a focus on optimization. After graduation, Nicole spent 4 years at IBM Research Ireland as an Optimization Research Scientist before joining End-to-End Analytics (E2E), an analytics consulting company based in Palo Alto. In 2021, E2E became part of Accenture, where Nicole is now Principal Director of Data Science. She's spent her career implementing analytics and optimization solutions across Fortune 500 companies to help improve their business processes, and she continues to work on fun and challenging real-world analytics projects in a wide range of industry applications.

    Nicole Rahmani

    Principal Director of Data Science at Accenture

Monday, May 15, 2023

    What is Responsible AI? A Practitioner’s Perspective

    In this talk we provide an introduction to Responsible AI from a practitioner's point of view. We take a first look at Responsible AI and its relevance by examining five of its pillars. We then do a deeper dive into two of them: fairness and transparency. We finish by zooming out and sharing both some lessons learned from five years of Responsible AI at Meta and some challenges ahead.

    Esteban D. Arcaute is currently the Senior Director leading Responsible AI at Meta. Prior to that he spent three and a half years establishing and growing the Data Science and Data Engineering functions within Meta AI, being a member of Meta AI’s leadership team throughout. Before joining Meta he was responsible for Walmart’s search engine. Under his leadership, ML techniques were incorporated across the stack. 

    Esteban was born in Mexico City. He earned a Ph.D. and an M.S. from Stanford University in Computational and Mathematical Engineering, and holds two other master's degrees, one in Electrical and Computer Engineering from Supélec (France) and one in Applied Mathematics from the Université de Metz (France). He is an advocate for diversity in STEM, having co-founded the Women in Data Science Worldwide conference, and currently serves on its Board of Directors.

    Esteban Arcaute

    Senior Director, Responsible AI at Meta

Monday, May 22, 2023

    Large Scale Computing in Banking: Opportunities and Challenges

    There is an abundance of instances where the banking industry relies on large-scale computations and data processing to run the business smoothly. At JPMorgan Chase, a leading global financial services firm, these instances span from investment banking to retail banking as well as regulatory risk and reporting. Not only are these models expensive to run and require considerable investment, but managing them inefficiently could also result in lost business opportunities. In this talk, speakers from the Machine Learning Center of Excellence at JPMorgan Chase will highlight how they are leveraging innovations in machine learning to scale and unlock business value.

    About Amit Varshney  

    Amit Varshney heads the QuantAI group at JPMorgan Chase’s Global Technology Center in Palo Alto, California. His team applies cutting-edge AI/ML research and technologies to solve some of the most challenging problems in mathematical finance. Prior to joining JPMorgan Chase, Amit worked at Google and MSCI, Inc., leading various initiatives around machine learning, quantitative finance on TensorFlow, and cross-asset derivatives pricing. Amit holds a Ph.D. in engineering from Penn State University and a bachelor's degree in engineering from the Indian Institute of Technology, Delhi. 

    About Leonard Eun  

    Len is a machine learning and quant researcher interested in solving traditional quant finance problems using applied machine learning, TensorFlow, and cloud-based high-performance computing technologies. Len has 10 years of experience in financial services, most of which was spent as a quant supporting interest rate trading desks. Len holds a Ph.D. in experimental particle physics from Penn State University, during which he worked on a major particle accelerator experiment analyzing large datasets using both traditional and machine learning techniques.


    About Moises Hernandez   
    Moises Hernandez is a senior AI & High-Performance Computing engineer working on the optimization of quant problems. Before joining JPMorgan Chase, Moises worked as an AI DevTech Engineer at NVIDIA, accelerating NLP applications and recommender systems on GPUs. Previously, he conducted research into brain connectivity, optimizing the analysis of diffusion MRI using GPUs. Moises received a Ph.D. in Neurosciences from Oxford University.

    Amit Varshney 

    Head of QuantAI at J.P. Morgan

    Leonard Eun 

    VP of Applied AI ML at J.P. Morgan

    Moises Hernandez  

    VP of Applied AI ML and Quant AI at J.P. Morgan

Monday, May 29, 2023

    Memorial Day - no class

Monday, June 5, 2023

    Trajectory Optimization for Robots and Self-Driving Vehicles

    Robots and self-driving cars must rapidly compute their trajectories to respond to changes in their environments. This requires solving trajectory optimization problems in under 30 milliseconds. Roboticists have relied on differential dynamic programming and the iterative linear-quadratic regulator algorithm to efficiently solve these problems. In this talk, we use modern optimization theory to understand these algorithms. We apply differential dynamic programming to several optimal-control problems and show that the Riccati recursion is the fundamental computational kernel in use. We show how transformer-based neural networks can learn an initial guess for trajectory optimization; these networks can even replace optimization. Finally, we discuss the self-driving field: its hype cycle and its impact on everyday life.
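
    For readers who want to see the kernel the abstract refers to, here is a minimal sketch of the discrete-time LQR Riccati recursion, the backward pass that DDP and iLQR solvers run repeatedly along a trajectory. The dynamics, costs, and horizon below are random placeholders for illustration, not anything from NVIDIA's planner.

    # Minimal LQR backward Riccati recursion and forward rollout (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, N = 4, 2, 30                                  # state dim, control dim, horizon
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # linearized dynamics x_{k+1} = A x_k + B u_k
    B = 0.1 * rng.standard_normal((n, m))
    Q, R = np.eye(n), 0.1 * np.eye(m)                   # quadratic stage costs x'Qx + u'Ru

    # Backward pass: accumulate the cost-to-go matrix P_k and feedback gains K_k.
    P = Q.copy()
    gains = []
    for k in reversed(range(N)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback u_k = -K_k x_k
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()

    # Forward rollout under the computed feedback policy.
    x = np.ones(n)
    for k in range(N):
        x = A @ x + B @ (-gains[k] @ x)
    print("final state norm:", np.linalg.norm(x))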

    Christopher Maes currently leads the Trajectory Planning team for NVIDIA's self-driving car project. Christopher received his PhD from Stanford University in Computational and Mathematical Engineering in 2010. He then did a postdoc at MIT's Operations Research Center working on routing airplanes around thunderstorms. In 2012, he joined Gurobi Optimization as the first developer outside of the founders. At Gurobi, he worked on barrier crossover, presolve, and MIP heuristics. He created the Gurobi MATLAB interface, the Gurobi tuning tool, and the Gurobi Instant Cloud. In 2016, Christopher joined Apple's Special Projects Group in Autonomous Systems. At Apple, he developed real-time solvers for applications in planning and controls and managed the Trajectory Optimization team. He joined NVIDIA in 2022 as a Senior Engineering Manager working on optimization and machine learning within the Planning and Controls group.

    Christopher Maes

    Senior Engineering Manager at NVIDIA
