ICME Summer Workshops 2021 | Fundamentals of Data Science
2021 Summer Workshops will be online via Zoom Aug 2-20
ICME’s 6th annual Summer Workshop Series will offer a variety of virtual data science and AI courses, taught live via Zoom by world-renowned Stanford faculty and Stanford-affiliated instructors. The series is open to the general public worldwide. Discounts are offered to students, staff, and faculty from all schools as well as to ICME industry partners. Attendees completing four or more workshops can earn a Stanford ICME Fundamentals of Data Science Summer Workshops Certificate of Completion.
New this year, the series offers:
- New and intermediate workshops, including Data Privacy and Ethics, Intermediate Topics in Machine Learning & Deep Learning, and Deep Learning for Natural Language Processing - Part II.
- Thirteen workshops over three weeks, from August 2-20, 2021.
- Half-day workshops (either 8-11 am or 1-4 pm Pacific Time), each spread over two days.
Introduction to Mathematical Optimization [closed]
1-4 pm PDT
Mathematical optimization underpins many applications in science and engineering, as it provides a set of formal tools to compute the ‘best’ action, design, control, or model from a set of possibilities.
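As a generic illustration (not a specific example from the workshop), the idea of computing the "best" choice from a set of possibilities is conventionally written as a constrained minimization problem:

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{subject to} \quad g_i(x) \le 0, \quad i = 1, \dots, m,
```

where \(x\) collects the decision variables (an action, design, control, or model), \(f\) scores how good a choice is, and the constraints \(g_i\) encode which choices are admissible.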
Data Privacy and Ethics [closed]
1-4 pm PDT
This workshop engages with difficult challenges in the modern practice of data science and the design of data products. We will begin by discussing the promises and perils of mining digital exhaust: location, transaction, social media, and other data types that are increasingly recorded and accessible within digital platforms.
Introduction to High Performance Computing [closed]
1-4 pm PDT
This workshop explores the features of three key parallel programming approaches: OpenMP, CUDA, and MPI. It explains the underlying philosophy of each and how each is adapted to different computer architectures.
Deep Learning for Natural Language Processing - Part II [closed]
8-11 am PDT
This workshop will introduce common practical use cases where natural language processing (NLP) models are applied using the latest advances in deep learning (e.g., Transformer-based models such as BERT).