How can AI address climate change? alleviate poverty? predict natural disasters? cure diseases?
How can AI avoid bias? respect ethical boundaries? protect privacy?
The AI for Good Seminar Series (CME 500) explores ways artificial intelligence can benefit society and our planet. In weekly talks, leaders from academia, industry, and NGOs who are at the forefront of using AI for social good showcase AI applications that are forging positive change in healthcare, the environment, education, technology, government, and more. Speakers discuss how challenges around fairness, bias, privacy, and ethics are beginning to be addressed.
Interested students will have access to supplemental materials such as mini case studies and Jupyter notebooks. Students of all academic backgrounds and interests can register for this 1-unit credit/no-credit course (CME 500). No prerequisites. Space permitting, this series is open to Stanford faculty, staff, and ICME partners. Students may register via Axess.
Opening Session: Is AI a Force for Good?
Read the recap article or watch the session below.
Our first session will lay the framework for our entire series by considering the central question of how AI can be a force for good. As we explore how artificial intelligence can address some of the world’s most vexing issues, we will pair the positive outcomes with crucial awareness of potential unintended consequences. Our panel will discuss considerations to be made to ensure this new era will help humanity and not harm it.
Rob Reich - Professor of Political Science with courtesy appointments in Philosophy and at the Graduate School of Education, Faculty Director of the Center for Ethics in Society, Faculty Co-Director of the Stanford Center on Philanthropy and Civil Society, and Faculty Associate Director of the Stanford Institute for Human-Centered AI.
Margot Gerritsen - Senior Associate Dean for Educational Affairs, Professor of Energy Resources Engineering, Senior Fellow at the Precourt Institute for Energy and Professor, by courtesy, of Civil and Environmental Engineering.
Sharad Goel - Founder & Executive Director, Stanford Computational Policy Lab, and Assistant Professor, Stanford Management Science & Engineering, with courtesy appointments in Computer Science, Sociology, and at Stanford Law School.
Scott Penberthy, Head of Applied AI, Google
Scott is a member of Google Cloud’s CTO office, a team of industry ex-CTOs who co-innovate with top customers and product teams. Scott utilizes machine learning (ML) as a tool for science and discovery. He enjoys predicting customer behavior and creating jaw-dropping experiences with voice, video and imagery. Lately Scott’s been an advisor and confidant to CEOs, sharing insights from a career delivering large scale systems. Previously, Scott landed public cloud at PwC for 200k employees in 2014, moved a video site for 5m users to AWS in 2008, sold a social photo site with 50M users in 2007, built mobile phone “widgets” in 2005, and launched a $13B web middleware and $4B web hosting business in the 90s. He’s an avid programmer, triathlete, space fan, guitarist, chef and father of two amazing daughters. Scott holds a PhD in AI with multiple degrees from MIT and the University of Washington. Follow @scottpenberthy on Twitter.
The Nonprofit Sector | Google AI Impact Challenge Grantees
See the recap article and watch the session below.
Breakthroughs in technology often have humble origins. Through its Google AI Impact Challenge grant program, Google.org lends a helping hand to nonprofit innovators and social entrepreneurs who are using the power of AI to address social and environmental challenges. This session will feature a panel of Google.org Impact Challenge Grantees who are using AI and machine learning to tackle issues affecting the environment, educational equity, at-risk youth, and mental health.
- Heejae Lim, Founder and CEO, TalkingPoints - TalkingPoints drives student success in low-income, diverse areas through AI-enabled two-way translated communication and personalized coaching content that guides parents' engagement with teachers and at home with their children, thereby building strong partnerships across families, schools, and communities.
- Grace Mitchell, Data Analyst, WattTime - WattTime is a nonprofit that offers technology solutions that make it easy for anyone to achieve emissions reductions without compromising cost, comfort, and function.
- Nick Hobbs, Senior Data Scientist, The Trevor Project - The Trevor Project saves lives by supporting at-risk LGBTQ youth via phone, text, and chat. Using natural language processing and sentiment analysis, counselors will be able to determine an LGBTQ youth's suicide risk level and better tailor services for individuals seeking help.
- Mollie Javerbaum, Google.org - The Google AI Impact Challenge was an open call to organizations to submit their ideas on how AI could help address societal challenges. Out of more than 2,600 proposals from 119 countries, Google selected 20 organizations to support with a total of $25M in grant funding from Google.org, coaching from Google's AI experts, credit and consulting from Google Cloud, and inclusion in a custom accelerator program.
AI for Earth and the Environment
How can AI and machine learning be leveraged to mitigate the impact of human activities on earth’s natural systems? Learn about data science tools and strategies being used to safeguard our water supply, feed the worldwide human population, and promote greater biodiversity and global sustainability. Join Lucas Joppa and Stefano Ermon in a conversation with Gretchen C. Daily, Director, Center for Conservation Biology; Faculty Director, Natural Capital Project; Bing Professor of Environmental Science, Stanford Department of Biology; and Senior Fellow, Woods Institute for the Environment.
Lucas Joppa, Chief Environmental Officer, Microsoft
As Microsoft’s first Chief Environmental Officer, Dr. Lucas Joppa works to advance the company’s core commitment to sustainability through technology innovation, program development, policy advancement, and global operational excellence. With a background in both environmental science and data science, Lucas is committed to using the power of advanced technology to help transform how society monitors, models, and ultimately manages Earth’s natural resources. Dr. Joppa founded Microsoft’s AI for Earth program in 2017—a five-year, $50 million cross-company effort dedicated to delivering technology-enabled solutions to global environmental challenges.
Previously, Lucas was Microsoft’s Chief Environmental Scientist and led research programs in Microsoft Research. He remains an active scientist and one of Microsoft’s foremost AI thought leaders, speaking frequently on issues related to Artificial Intelligence, environmental science, and sustainability. With extensive publication in leading academic journals, such as Science and Nature, Dr. Joppa is a uniquely accredited voice for sustainability in the tech industry.
He holds a PhD in Ecology from Duke University, a BS in Wildlife Ecology from the University of Wisconsin, and is a former Peace Corps volunteer to Malawi.
Stefano Ermon, Assistant Professor of Computer Science, Stanford University
Dr. Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and is a fellow of the Woods Institute for the Environment. His research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty, and is motivated by a range of applications, in particular ones in the emerging field of computational sustainability. Dr. Ermon received his PhD from Cornell University.
Gretchen C. Daily - Professor of Environmental Science and Director of the Center for Conservation Biology at Stanford.
AI for Government
AI promises to transform how government agencies work. Where will it have the biggest impact? What are some challenges around transparency, privacy, bias, and accountability? This talk will go beyond the headlines and share highlights of a just-completed report on AI in the US Government.
David Freeman Engstrom - Professor and Associate Dean for Strategic Initiatives, Stanford Law School
David Freeman Engstrom is the Bernard D. Bergreen Faculty Scholar and an Associate Dean at Stanford Law School. He is an elected member of the American Law Institute and a faculty affiliate at the Stanford Institute for Human-Centered AI, CodeX: The Stanford Center for Legal Informatics, and the Regulation, Evaluation, and Governance Lab (RegLab). He received a J.D. from Stanford Law School, an M.Sc. from Oxford University, and a Ph.D. in political science from Yale University and clerked for Chief Judge Diane P. Wood on the U.S. Court of Appeals for the Seventh Circuit. Before joining Stanford's faculty, he practiced law, representing clients before the U.S. Supreme Court and other courts and agencies.
Daniel Ho - Professor of Law, Professor of Political Science, Director of the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford University
Daniel Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. Dr. Ho received his J.D. from Yale Law School and Ph.D. from Harvard University and clerked for Judge Stephen F. Williams on the U.S. Court of Appeals, District of Columbia Circuit. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences, and is an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
AI for Healthcare
Marzyeh Ghassemi - Assistant Professor in the Faculties of Computer Science and Medicine at the University of Toronto, Vector Institute faculty member, and holder of a Canadian CIFAR AI Chair and Canada Research Chair.
You can watch the session below or read the article.
Abstract: Improving health requires targeting and evidence, and Marzyeh tackles part of this puzzle with machine learning. This session will cover some of the novel technical opportunities for machine learning in health and the progress that can be made through careful application to the domain. She will also walk through the dangers of applying methods without a robust understanding of the domain and of their potential downstream uses.
Bio: Professor Ghassemi has a well-established track record of research contributions across computer science and clinical venues, including KDD, AAAI, MLHC, JAMIA, JMIR, JMLR, Nature Translational Psychiatry, and Critical Care. She is an active member of the scientific community, serving on the Board of Women in Machine Learning (WiML) and co-organizing the past three NIPS Workshops on Machine Learning for Health (ML4H). She served as a NeurIPS 2019 Workshop Co-Chair and as a Board Member of the Machine Learning for Health Unconference. Previously, she was a Visiting Researcher with Alphabet's Verily and a postdoc with Dr. Peter Szolovits at MIT. Marzyeh targets “Healthy ML”, focusing on applying machine learning to understand and improve health.
Professor Ghassemi completed her PhD at MIT where her research focused on machine learning in health care. Prior to MIT, she received a Master’s degree in biomedical engineering from Oxford University as a Marshall Scholar and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University.
AI for Human Rights
Megan Price - Executive Director of the Human Rights Data Analysis Group
Abstract: As a team of scientists working as statisticians for human rights, the Human Rights Data Analysis Group (HRDAG) partners with human rights advocacy organizations to identify questions that can be answered and arguments that can be strengthened using data science. Dr. Price’s talk will highlight how data science and AI methods and tools are being used to tell stories, build cases, and answer important questions about the human toll of conflicts in Syria, Mexico, and Guatemala. She will also address the potential harm that can be done when relying on incomplete and imperfect data in domestic situations such as predictive policing of drug use in Oakland.
Bio: As the Executive Director of the Human Rights Data Analysis Group, Megan Price designs strategies and methods for statistical analysis of human rights data for projects in a variety of locations including Guatemala, Colombia, and Syria. Her work in Guatemala includes serving as the lead statistician on a project in which she analyzes documents from the National Police Archive; she has also contributed analyses submitted as evidence in two court cases in Guatemala. Her work in Syria includes serving as the lead statistician and author on three reports, commissioned by the Office of the United Nations High Commissioner of Human Rights (OHCHR), on documented deaths in that country.
Megan is a member of the Technical Advisory Board for the Office of the Prosecutor at the International Criminal Court, on the Board of Directors for Tor, and a Research Fellow at the Carnegie Mellon University Center for Human Rights Science. She is the Human Rights Editor for the Statistical Journal of the International Association for Official Statistics (IAOS) and on the editorial board of Significance Magazine. She earned her doctorate in biostatistics and a Certificate in Human Rights from the Rollins School of Public Health at Emory University. She also holds a master of science degree and bachelor of science degree in Statistics from Case Western Reserve University.
The Future of AI
News Coverage in the Stanford Daily. Watch the session below.
What Comes Next? Beyond today’s plans for driverless cars and workerless factories, what will the future of AI really look like? Which sectors and nations will be affected the most? What core tenets should be set to ensure AI works for all of humanity? This session will be a conversation about the longer-term impact of the AI era and an inside look at the new Stanford Institute for Human-Centered AI (HAI), with John Etchemendy, Co-Director of HAI and Stanford's Provost Emeritus, and Professor Russ Altman, host of the Stanford School of Engineering "The Future of Everything" podcast.
John W. Etchemendy, Provost Emeritus, and Patrick Suppes Family Professor in the School of Humanities and Sciences, and Co-Director, Human-Centered Artificial Intelligence Initiative, Stanford University
John Etchemendy received his BA and MA in Philosophy from the University of Nevada, Reno in 1973 and 1976 respectively. He earned his doctorate in Philosophy at Stanford University in 1982. After two years on the faculty at Princeton University, he returned to the Department of Philosophy at Stanford in 1983 and has been a faculty member at Stanford since that time. He has won numerous awards for teaching excellence and leadership both at Stanford and nationally.
Professor Etchemendy is a founding faculty member of the Symbolic Systems Program and a senior researcher at the Center for the Study of Language and Information (CSLI), after having served as Director of CSLI from 1990 to 1993. He is the author or co-author of seven books and numerous articles in logic. He has been co-editor of the Journal of Symbolic Logic and on the editorial board of several other journals.
In 2012, Professor Etchemendy was elected as a Commissioner for the Accreditation Commission of the Western Association of Schools and Colleges. In 2014 he was appointed by Education Secretary Arne Duncan to the National Advisory Committee on Institutional Quality and Integrity (NACIQI), which advises the Secretary on higher education accreditation.
Professor Etchemendy was Stanford’s Associate Dean for the Humanities from 1993 to 1997, and Provost from 2000 to 2017. He was the longest serving provost in Stanford history and one of the longest serving provosts at any U.S. institution. Since early 2019 he has served as the co-director of the Human-Centered Artificial Intelligence Initiative at Stanford University.
Russ Altman, Kenneth Fong Professor of Bioengineering, Genetics, Medicine, Biomedical Data Science and, by courtesy, Computer Science, and past chairman of the Bioengineering Department at Stanford University
Russ’ primary research interests are in the application of computing and informatics technologies to problems relevant to medicine. He is particularly interested in methods for understanding drug action at molecular, cellular, organism and population levels. His lab studies how human genetic variation impacts drug response. Other work focuses on the analysis of biological molecules to understand the actions, interactions and adverse events of drugs. He helps lead an FDA-supported Center of Excellence in Regulatory Science & Innovation.
AI for Everyone | A Multi-Disciplinary Approach
Due to coronavirus-related precautions limiting travel and large gatherings, this talk was canceled.
How do we ensure AI solutions are designed to work for all – regardless of race, gender, ability, or background? Within the promise of artificial intelligence lie a number of difficult questions and challenges. A multi-disciplinary approach, one that has people from a variety of backgrounds involved in designing the solutions, is needed. In their talks and joint Q&A, Timnit and Omer will address challenges around data collection and algorithm development regarding bias, fairness, accountability, differential privacy and ethics.
Timnit Gebru, Research Scientist and Technical Co-lead of Google’s Ethical Artificial Intelligence Team
Omer Reingold, The Rajeev Motwani Professor of Computer Science at Stanford University.
The Stanford Institute for Computational & Mathematical Engineering (ICME) gratefully acknowledges Google's financial support for this series.
This series is presented in collaboration with the following Stanford co-sponsors:
Monday, March 29, 2021 - Monday, May 24, 2021