Our Role in Ensuring AI is a Force for Good
AI for Good series kicks off with three unique perspectives on the ethical considerations and implications of artificial intelligence.
By Izzy Pirimai Aguiar
The kickoff session of the Institute for Computational and Mathematical Engineering (ICME)’s seminar series, AI for Good, struck a careful balance: challenging yet accessible, focused yet interdisciplinary, controversial yet fair, grounded yet thought-provoking.
The opening seminar of the series, “Is AI a Force for Good?”, was a panel conversation moderated by Scott Penberthy, the head of Applied AI at Google. The panel consisted of Rob Reich, Professor of Political Science and Director of the McCoy Center for Ethics in Society; Margot Gerritsen, Professor of Energy Resources Engineering and Senior Associate Dean of the School of Earth; and Sharad Goel, Founder & Executive Director of the Stanford Computational Policy Lab and Assistant Professor in Management Science & Engineering. Each of the three panelists brought to the discussion a unique perspective on the implications and responsibilities surrounding the ethics of artificial intelligence.
Reich’s training as a philosopher informed his thoughtful, eloquent, and philosophically challenging remarks on the issue. Reich described how the ethical and moral problems presented by artificial intelligence travel beyond personal ethics (“how do I be a good person?”) and into professional and social ethics. Drawing upon equivalent standards in medicine and the biological sciences (e.g., the Hippocratic Oath, Institutional Review Boards), Reich argued for the necessity of professional standards in the AI community. He questioned how, in political and social decisions regarding AI, we will referee problems surrounding privacy, targeting, and other ethical considerations. While he noted that there is no single correct answer, he stressed the urgency of addressing these questions.
Reich added that AI is a tool for making processes more efficient, and that we need to question whether the ends to which we apply our tools are independently worthwhile. “We are relying on the AI scientist or the company using AI alone to decide what is good and what is not good, and we need to be able to do it ourselves.”
Gerritsen in turn brought ethical considerations and perspectives informed by her experience working in the engineering and data science fields. Throughout her career, Gerritsen has been interested in the robustness, accuracy, and trustworthiness of the models and methods she’s used, and it’s these considerations that she sees lacking in many widespread implementations of AI. When AI started gaining traction, it promised the ability to do what experts had spent years developing and studying, but without knowing the underlying physics, science, or models. Gerritsen saw a very rapid adoption of these techniques, often without an understanding of the liability or trustworthiness of the models they produced. “We are in a period where AI is being used by everybody for everything, sometimes without understanding what the question is.” This, to Gerritsen, is an ethical dilemma, one that in practice can be addressed by teaching people what AI can and cannot do, what can and cannot be understood, and by developing tools to more robustly understand solutions.
In the world today, so many decisions are based on AI or other algorithms, and it’s critical for researchers and implementers to understand not only the algorithm they’re using but also the subject to which it’s applied. To aspiring engineers in the audience, Gerritsen added that given the wealth of resources and class options at Stanford, “there is no excuse” not to pursue a holistic, liberal arts education. These different perspectives and ways of thought will help future engineers make ethical decisions regarding AI.
Perfectly bridging the perspectives of Reich and Gerritsen, Goel’s experience lies in designing and deploying algorithms to aid public policy decisions. Goel described his current project, being implemented in the District Attorney’s office in San Francisco, which is designed to reduce the influence of implicit bias on prosecution decisions. Goel has built a platform that masks race-related words and racial proxies in police reports that might otherwise sway an implicitly biased prosecution decision. While this project is inherently socially responsible, Goel added, it is also technically interesting and challenging. Goel’s research group is also designing platforms to facilitate a door-to-door rideshare service for people who are likely to miss their trial date due to transportation issues. “These could change the lives of tens of thousands of people who are affected by our byzantine criminal justice system.”
When discussing the potential of AI, Goel noted that “being humble is very, very important in this area, but very hard to do.” Rather than assuming that mastery of the technical material means they can do anything, AI scientists must remain humble, ask questions, and seek out additional perspectives. AI scientists are, at present, among a small population of the world who can understand and implement this technical material.
The panel this week left the audience excited and thoughtful about the wide range of ethical considerations surrounding artificial intelligence. Regarding the term “AI for good,” Goel cautioned that these words tend to exclude those not interested or fluent in AI, and that we need to inclusively bring a range of expertise into the conversation. As we prepare for a new decade, a new year, a new quarter, and next week’s session of this series, let us remember that this conversation is not only about how AI can be a force for good, but about how we can all do good, together.