Upcoming Events

Scaled Machine Learning Conference

Saturday, March 25, 2017, 8:30am to 6:00pm

Machine learning is evolving to take advantage of new hardware such as GPUs and large commodity clusters. University and industry researchers have been using these new computing platforms to scale machine learning across many dimensions.

This conference aims to bring together researchers who run machine learning algorithms on a variety of computing platforms and to foster discussion among them. The goal is to encourage the algorithm designers for these platforms to help each other scale and to transplant ideas between platforms.

Speakers and Panelists

Jeff Dean (Google)
Scaled Machine Learning with TensorFlow and XLA
Ion Stoica (UC Berkeley and Databricks)
Distributed Machine Learning and the Berkeley RISE lab
Reza Zadeh (Stanford and Matroid)
Scaling Computer Vision at Matroid
Rajat Monga (Google)
Panel on Scaled ML
Ben Lorica (O'Reilly)
Panel on Scaled ML
Wes McKinney (Two Sigma)
Scaling Challenges in Pandas 2.0
David Ku (Microsoft)
Scaled Machine Learning at Microsoft
Ian Buck (NVIDIA)
Scaled Machine Learning on NVIDIA GPUs
Claudia Perlich (Dstillery)
Andy Feng (Yahoo)
TensorFlow on Apache Spark
DB Tsai (Netflix)
Panel on Scaled ML
Ziya Ma (Intel)
Scaling ML on Intel CPUs
Matei Zaharia (Stanford)
DAWN: Infrastructure for Usable Machine Learning
Ilya Sutskever (OpenAI)
Scaling Reinforcement Learning

Schedule

Saturday, March 25, 2017
========

  • 08:45-09:00 Reza Zadeh, Introduction
  • 09:00-10:00 Ion Stoica
  • 10:00-11:00 Reza Zadeh
  • 11:00-11:30 David Ku
  • 11:30-12:00 Matei Zaharia
  • 12:00-13:00 Lunch Break
  • 13:00-14:00 Jeff Dean
  • 14:00-15:00 Panel: Ziya Ma, Rajat Monga, DB Tsai, Ben Lorica
  • 15:00-15:30 Claudia Perlich
  • 15:30-16:00 Break
  • 16:00-16:30 Ilya Sutskever
  • 16:30-17:00 Wes McKinney
  • 17:00-17:30 Ian Buck
  • 17:30-18:00 Andy Feng

Registration

Please register here: http://scaledml.org/

Friday, March 31, 2017, 9:00am to 5:00pm

Overview

Friday, March 31, 2017
9:00am to 4:45pm
Stanford University

This workshop presents the basics behind the application of modern machine learning algorithms. We will discuss a framework for reasoning about when to apply various machine learning techniques, emphasizing questions of over-fitting/under-fitting, regularization, interpretability, supervised/unsupervised methods, and handling of missing data. The principles behind the various algorithms, the why and how of using them, will be discussed; the mathematical detail underlying the algorithms, including proofs, will not.

Unsupervised machine learning algorithms presented will include k-means clustering, principal component analysis (PCA), and independent component analysis (ICA). Supervised machine learning algorithms presented will include support vector machines (SVM), classification and regression trees (CART), boosting, bagging, and random forests. Imputation, the lasso, and cross-validation concepts will also be covered.

The R programming language will be used for examples, though participants need not have prior exposure to R. The workshop runs from 9:00 a.m. to 4:45 p.m. in four 75-minute sessions separated by breaks. Please note that this is not a Stanford for-credit course.
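
For a flavor of the R examples, here is a minimal sketch, assuming only base R and not drawn from the workshop materials, that applies two of the unsupervised methods above, k-means clustering and PCA, to R's built-in iris data:

    # Illustrative sketch only: k-means and PCA on R's built-in iris data
    data(iris)
    features <- iris[, 1:4]           # four numeric measurements; drop the species label

    # k-means with k = 3 (iris contains three species)
    set.seed(42)                      # k-means depends on random initialization
    km <- kmeans(scale(features), centers = 3, nstart = 25)
    table(km$cluster, iris$Species)   # compare recovered clusters to the true labels

    # PCA: project the four-dimensional data onto its principal components
    pca <- prcomp(features, scale. = TRUE)
    summary(pca)                      # proportion of variance explained per component
    head(pca$x[, 1:2])                # scores on the first two components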

Prerequisites: undergraduate-level linear algebra and statistics; basic programming experience (R/Matlab/Python).

Topics Include

  • Basic Concepts and Intro to Supervised Learning: linear and logistic regression
  • Penalties, regularization, sparsity (lasso, ridge, and elastic net; see the sketch after this list)
  • Unsupervised Learning: clustering (k-means and hierarchical) and dimensionality reduction (Principal Component Analysis, Independent Component Analysis, Self-Organizing Maps, Multi-Dimensional Scaling)
  • Unsupervised Learning: NMF and text classification (bag-of-words model)
  • Supervised Learning: loss functions, cross-validation (bias-variance trade-off and learning curves), imputation (K-nearest neighbors and SVD), imbalanced data
  • Classification and Regression Trees (CART)
  • Ensemble methods (Boosting, Bagging, and Random Forests)
  • Support Vector Machines (SVM)
  • Deep learning: Neural Networks (Feed-Forward, Convolutional, Recurrent) and training algorithms
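
As a companion to the regularization and cross-validation topics above, here is a minimal sketch of fitting the lasso with 10-fold cross-validation on synthetic data. It assumes the glmnet package, a common R choice for penalized regression; the workshop's own materials may use different tools:

    # Illustrative sketch only: lasso with 10-fold cross-validation
    # Assumes the glmnet package: install.packages("glmnet")
    library(glmnet)

    set.seed(1)
    n <- 100; p <- 20
    X <- matrix(rnorm(n * p), n, p)
    beta <- c(3, -2, 1.5, rep(0, p - 3))   # only 3 of the 20 predictors matter
    y <- drop(X %*% beta) + rnorm(n)

    # alpha = 1 gives the lasso; alpha = 0 gives ridge; in between, the elastic net
    cv <- cv.glmnet(X, y, alpha = 1, nfolds = 10)
    cv$lambda.min                          # penalty strength chosen by cross-validation
    coef(cv, s = "lambda.min")             # sparse fit: most coefficients shrunk to zero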

This workshop is open to participants 18 years and older. If you are under the age of 18 and would like to participate, please email icme-contact@stanford.edu.

To register for this workshop, please visit https://app.certain.com/profile/form/index.cfm?PKformID=0x2531409ffbb


Instructors

Alex Ioannidis

Alexander is a PhD candidate in the Institute for Computational and Mathematical Engineering at Stanford. His research, under Prof. Carlos Bustamante, chair of the Department of Biomedical Data Science at Stanford Medical School, focuses on applying machine learning techniques to medicine and human genetics. Prior to Stanford, he earned his bachelor's in Chemistry and Physics from Harvard and an MPhil from the University of Cambridge. He worked for several years on superconducting and quantum computing architectures at Northrop Grumman's Advanced Technologies research center in Linthicum, MD. In his free time he enjoys sailing.

Gabriel Maher

Gabriel Maher is a PhD student in the Institute for Computational and Mathematical Engineering at Stanford University. His research applies deep learning to cardiovascular medical image analysis with Dr. Alison Marsden at the Cardiovascular Biomechanics Computation Lab.