
How Nonprofits are Leveraging AI

This week’s seminar highlighted the ways in which three nonprofits are integrating artificial intelligence into their platforms for the first time.

AI for Nonprofits Panel

By Izzy Pirimai Aguiar

When Google.org put out a call seeking organizations that are applying AI to address societal challenges, it received over 2,000 applications spanning 119 countries. From these applications it chose 20 grantees whose projects range from optimizing emergency response times at the New York City Fire Department to building AI models for pest management on farms. At this week’s AI for Good seminar, we had the unique opportunity to hear from three recipients of the Google AI Impact Challenge grant about the benefits, challenges, and aspirations of using AI in their organizations for the first time.

Mollie Javerbaum from Google.org moderated the discussion with Nick Hobbs from The Trevor Project, Grace Mitchell from WattTime, and Heejae Lim from Talking Points, three recipients of Google’s grant who represent a diverse set of perspectives and topical interests.

The Trevor Project is the world’s largest suicide prevention and crisis intervention service for young LGBTQ+ people. For young people in a crisis, feeling suicidal, or just needing to talk, the Trevor Project provides access to trained counselors through their crisis hotline, TrevorLifeLine. As the project has reached more people, Hobbs noted, they have uncovered a scaling problem. As it was previously structured, counselors reached people on a standard, “first in, first out” queue. However, as more and more youth seek help through the TrevorLifeLine, this approach sometimes results in high-risk callers waiting too long. It was in search of a solution to this problem that the Trevor Project applied for this grant.

Given the range of callers’ needs, Hobbs and his team want to use natural language processing (NLP) to determine how at risk a caller is of attempting suicide. Although they have tried assessing risk through responses to a sequence of various prompts, Hobbs observed that the most reliable way to assess risk of suicide is by asking, “what’s going on?” and listening to the response. “The goal is to get them to a good spot,” said Hobbs. By integrating NLP tools into their hotline queue, Hobbs hopes to assess responses to this question and connect those most at risk with counselors as quickly as possible.
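The shift Hobbs describes, from a "first in, first out" queue to one ordered by assessed risk, can be sketched in a few lines. This is purely illustrative and not the Trevor Project's actual system: the `risk_score` function below is a trivial keyword heuristic standing in for a trained NLP model, and the caller data is made up.

```python
import heapq
import itertools

def risk_score(message: str) -> float:
    """Stand-in for an NLP risk model: returns a score in [0, 1].
    A real system would use a trained classifier, not keywords."""
    high_risk_terms = {"hopeless", "plan", "goodbye"}
    words = set(message.lower().split())
    return len(words & high_risk_terms) / len(high_risk_terms)

class CrisisQueue:
    """Serves the highest-risk caller first; FIFO among equal scores."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves arrival order

    def enqueue(self, caller_id: str, message: str) -> None:
        score = risk_score(message)
        # heapq is a min-heap, so negate the score for highest-risk-first ordering
        heapq.heappush(self._heap, (-score, next(self._counter), caller_id))

    def next_caller(self) -> str:
        return heapq.heappop(self._heap)[2]

queue = CrisisQueue()
queue.enqueue("caller-1", "just want to talk")
queue.enqueue("caller-2", "feeling hopeless and have a plan")
print(queue.next_caller())  # caller-2 is served first despite arriving later
```

The arrival counter matters: without it, callers with equal scores would be ordered arbitrarily rather than by wait time.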

WattTime is an environmental nonprofit whose goal is to give people the power to choose clean energy. As written on the organization’s web page, “We have options in nearly everything we do and buy: where to live, what color to paint the bedroom, what car to drive, what phone to buy and what apps to fill it with, which foods we eat. Yet we’ve had little to no choice in the electricity we consume...” WattTime’s technology empowers their users to make these decisions through data analysis and automated algorithms.

WattTime uses the Environmental Protection Agency’s (EPA) open-access data set of hourly emissions data. “This project is giving us the first step of how we enable automated emissions reduction at a high quality around the world,” Mitchell said. With Google’s grant, she and her team aim to use AI to find a correlation between satellite imagery (which is global and freely available) and emissions data. “We are in a scenario where we have this great data set that we want to have everywhere.” Although the United States is able to provide this data, it is the result of prohibitively expensive monitoring; other countries don’t necessarily have the resources to collect it. Mitchell sees WattTime as key in empowering these countries with technology that can help them cost-effectively reduce their emissions.
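The approach Mitchell describes is essentially supervised learning: where ground-truth monitoring exists (such as the EPA data), pair the measurements with satellite-derived features, fit a model, and then apply that model to plants that lack monitoring. A minimal sketch of the idea, assuming a single hypothetical image-derived feature (say, plume brightness) and ordinary least squares; the numbers are illustrative only and not WattTime's actual data or model.

```python
# Illustrative only (not WattTime's model): fit a line mapping one
# satellite-derived feature to measured emissions, then predict
# emissions for a plant that has no ground monitoring.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Made-up training pairs: (plume-brightness feature, tons CO2 per hour)
features = [0.2, 0.5, 0.8, 1.1]
emissions = [10.0, 25.0, 40.0, 55.0]

a, b = fit_line(features, emissions)
# Predict emissions for an unmonitored plant with feature value 0.9
print(round(a * 0.9 + b, 1))  # prints 45.0
```

A real system would use many image features and a far more flexible model, but the train-where-monitored, predict-where-unmonitored structure is the same.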

Lim (Stanford GSB, ’15) founded Talking Points upon the realization that children spend 85% of their days outside of school. This means that families and parents play a critical role in supporting their children’s education. For underserved communities, however, families have fewer means by which to engage; there are language barriers, multiple shifts at work, and insecurities around their own level of education. Lim founded Talking Points as a platform to remove these obstacles to involved parenting, providing translation resources and coaching for parents and teachers. “We use technology to really empower and build relationships that empower that kind of love and human connection,” Lim said.

As of a week ago, Talking Points had facilitated 20 million conversations between parents and teachers (60% of which were in a language other than English). As Talking Points facilitates more and more conversations, they are collecting data on home environments and the unique and varied ways in which students struggle. Lim wants to use AI to leverage this wealth of data to push forward their mission by recommending personalized coaching for parents and teachers.

Reflecting on Professor Margot Gerritsen’s comment in last week’s seminar that AI solutions can be ambiguous and uninterpretable, all three grantees communicated their awareness of the limitations of utilizing AI in their applications. Hobbs, Mitchell, and Lim are addressing this issue by building trust with their users: providing accurate and concrete results, communicating the limitations of their solutions, and including users in discussions about the purposes of new technologies. Above all, these three prioritize their users in every decision. As Lim said, “Focus on the problem and defining the problem. We solve for a problem and... that solution happens to be technology based, and not the other way around. If that technology is not AI… our users do not care… they care that the product you give them has an impact in their lives every day.”