AI for Human Rights

Dr. Megan Price, the Executive Director of the Human Rights Data Analysis Group (HRDAG), spoke at the Stanford ICME AI for Good seminar on February 24, 2020.
Eleanor Roosevelt holding the Universal Declaration of Human Rights

By Izzy Pirimai Aguiar

Dr. Megan Price, the Executive Director of the Human Rights Data Analysis Group (HRDAG), spoke at the Stanford ICME AI for Good seminar on February 24. HRDAG is a nonprofit organization based in San Francisco that partners with international groups, combining its technical expertise with each partner's contextual experience to answer substantive questions quantitatively. Through these collaborations, HRDAG has strengthened human rights advocacy by providing statistical support for accusations of human rights violations.

The Universal Declaration of Human Rights (UDHR), ratified in 1948 by the United Nations and shown in the photo above, contains 30 articles defining the rights of every individual, regardless of nationality, citizenship, or residence. HRDAG focuses primarily on the rights to life and liberty and the right not to be tortured or arbitrarily detained, although, as Price noted, the UDHR covers a much wider range, from privacy to equal protection under the law. In advocating for these rights, Price reminded the audience that "the role data science has to play in a lot of these efforts is as a footnote, as a technical appendix… it doesn't necessarily need to be front and center, but it needs to be right."

Price described the touchstone of her organization's work as a tension between how truth is simultaneously discovered and obscured. HRDAG sits at the intersection of this tension: the group consistently participates in science's progressive uncovering of what is true, yet it is accustomed to working in spaces where that truth is denied. Among HRDAG's many responsibilities is "speaking truth to power," said Price, "and if that's what you're doing, you have to know that your truth stands up to adversarial environments."

High on HRDAG's list of technical concerns is biased and missing data, and Price expressed gratitude that this topic is now more widely discussed and addressed in the data science and AI fields. In human rights work, such as analyzing counts of political prisoners, people missing or killed in conflict zones, and other victims of human rights violations, "we hardly ever have representative data or complete data, it's all what people were able to document during one of the worst periods of their lives." Price shared an example from a partnership between HRDAG and the United Nations, in which they used Multiple Systems Estimation, a capture-recapture method, to combine five incomplete databases and estimate the number of victims killed in Syria. "Our job as statisticians and data scientists is to analyze that data appropriately."
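The core idea behind Multiple Systems Estimation can be illustrated with its simplest special case: a two-list capture-recapture estimate, where the overlap between two independent, incomplete lists indicates how much is missing from both. The sketch below uses the Chapman bias-corrected form of the Lincoln-Petersen estimator with hypothetical counts; the actual HRDAG/UN Syria analysis combined five lists and used more sophisticated models, so this is only a toy illustration of the principle.

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected two-list capture-recapture estimate.

    n1, n2: number of victims recorded in each of two independent lists.
    m: number of victims appearing in both lists (the overlap).
    Returns an estimate of the total population, documented or not.
    """
    if m < 0 or m > min(n1, n2):
        raise ValueError("overlap m must satisfy 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for illustration: one database records 400 victims,
# another records 300, and 60 names appear in both.
observed = 400 + 300 - 60          # 640 unique names actually documented
estimated_total = lincoln_petersen(400, 300, 60)
print(round(estimated_total))      # prints 1978: far more than the 640 documented
```

The small overlap relative to the list sizes is what drives the estimate upward: if two independent documentation efforts rarely record the same people, many victims were likely recorded by neither.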

Price presented many additional human rights advocacy efforts HRDAG has worked on over the past 25 years, ranging from locating hidden graves in Mexico, to providing expert testimony in criminal trials for human rights violations, to evaluating the systemic racism propagated and reinforced by popular predictive policing algorithms. In all of these examples, as well as in all current and potential applications of AI, Price made clear that our priorities must shift in order to do good. "We believe that we have that obligation to pay due respect to the victims and the witnesses who are sharing their story with us."

Expressing worry about the influence of for-profit vendors (like the one providing predictive policing services to police departments across the country) and about the practice of using AI simply because we can, Price reminded the audience that specific guiding questions should and must be asked. Does implementing this algorithm improve the status quo? If it doesn't, who bears the cost of the algorithm's mistakes? In the predictive policing example, the cost is borne by community members, marginalized and vulnerable populations who are being over-policed: not an improvement on the status quo but a reinforcement of the failures of our system.

Price reiterated these questions across the varied examples of AI deployment she discussed, and they became something of a mantra for the rest of the talk and throughout the Q&A: Does it improve the status quo? Who bears the cost of the mistakes? In the wake of our discussions from previous weeks of AI for Good, this mantra served as a reminder of how to double-check decisions about where and how we implement AI.