
From Adjudication to Enforcement: AI in Government Agencies

By Izzy Pirimai Aguiar

When considering the uses of artificial intelligence in the US government, we tend to think of facial recognition technology used to identify suspects at crime scenes, criminal risk assessment algorithms that inform bail and parole decisions, or predictive policing algorithms that pinpoint neighborhoods where future crimes are likely to take place. Where government use of AI has strayed beyond criminal justice, however, its details and implementations have remained largely out of view.

To address this gap, Dan Ho and David Freeman Engstrom, both professors at Stanford Law School, along with Catherine M. Sharkey of New York University and Mariano-Florentino Cuéllar, a professor at Stanford Law School and a justice on the Supreme Court of California, formed a team to uncover and identify the various ways the government is currently using AI. These four, with a team of 15 law students and 10 engineering students, produced the report Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, which will soon be submitted to the Administrative Conference of the United States. At this week’s session of the ICME AI for Good Seminar Series, Ho and Engstrom presented a preview of the report, along with questions and challenges for the audience.

The session began with an overview of the landscape of AI in government. Of the 120 government agencies surveyed, 45% have experimented with, or are currently using, some type of AI governance tool. The team found roughly 160 specific use cases, spanning domains from policy to education. Examples range from chatbots at the Department of Housing and Urban Development to the United States Postal Service’s experiments with autonomous vehicles for mail delivery. In their talk, Engstrom and Ho focused on the two use cases they saw as most consequential: agency adjudication of rights and benefits, and enforcement agencies’ use of AI for predictive enforcement targeting. “We can see in [these examples] AI moving to the heart of the redistributive and coercive power of the state.”

Adjudication, formally, is the act of resolving a dispute or deciding a case in a judicial proceeding. Examples include immigration judges ruling on asylum claims from political refugees, and the Department of Veterans Affairs deciding claims for disability compensation. As Ho commented, administrative judges adjudicate more cases than all the federal courts combined, and “for decades, one of the biggest challenges in this system has been how to ensure the accuracy and consistency of these decisions.” On the other side of every case file, moreover, are human beings whose lives are affected by these decisions, so timeliness is also a priority. Various government agencies are therefore using AI to streamline their practices and processes, flag inconsistencies in decisions, and make more consistent and accurate decisions on appeals.

When AI is used for enforcement, implementations reach beyond policing into the prosecution of trademark infringement, tax evasion, and violations of environmental protection laws. Current examples include predicting which investment brokers are likely to engage in misconduct, analyzing the similarity between new and existing company logos, and identifying agricultural sites that over-pollute.

Within these applications, Engstrom notes, “The challenge isn’t just good data, but the dynamic nature of wrongdoing.” As regulations adapt, so do the techniques people use to get around them, so models in these applications must be continually retrained to stay current. Such algorithms also risk amplifying bias in policing, with severe consequences for historically over-policed groups. Ensuring that predictive algorithms don’t generate feedback loops is not only a legal and technical challenge, but also a humanistic and moral necessity.
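To make the feedback-loop concern concrete, consider a minimal, purely illustrative simulation (a hypothetical sketch, not an example from the report): two districts have identical true violation rates, but inspections are allocated in proportion to past detected violations. Because violations are only ever detected where inspectors are sent, whichever district happens to accumulate more detections early on keeps drawing more attention, and the skew never corrects itself.

```python
import random

random.seed(0)

TRUE_RATE = 0.3              # identical underlying violation rate in both districts
INSPECTIONS_PER_ROUND = 100
detected = [3, 1]            # hypothetical early detections; district 0 has a small head start

for _ in range(20):
    # Naive "predictive targeting": allocate inspections in proportion
    # to each district's share of past detected violations.
    share_0 = detected[0] / (detected[0] + detected[1])
    alloc_0 = round(INSPECTIONS_PER_ROUND * share_0)
    allocations = [alloc_0, INSPECTIONS_PER_ROUND - alloc_0]
    for district, n_inspections in enumerate(allocations):
        # Violations are only detected where inspectors actually look.
        detected[district] += sum(random.random() < TRUE_RATE
                                  for _ in range(n_inspections))

# Despite identical true rates, detections (and therefore future inspections)
# concentrate in the district the model favored first.
print("Detected violations per district:", detected)
```

In this toy setup the data the model learns from is a product of its own past decisions, which is exactly the self-reinforcing dynamic the speakers warn against.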

As with many of the instances we’ve seen in this seminar series, the applications of AI in government also present novel technical challenges and future research areas. For instance, implementations that depend on natural language processing (NLP) require dense legal texts to be labeled, a process that demands substantial legal expertise. Furthermore, “the law that actually governs how agencies do their work is based on transparency and reason giving. When government takes actions that affect our lives, it’s supposed to explain why… But many of the more sophisticated AI tools… are by their structure not fully explainable. The result is that we have this basic collision.” Government applications of AI therefore demand interpretable algorithms to a greater degree than most other disciplines or applications.
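A small, hypothetical sketch in scikit-learn (using synthetic data invented here, not drawn from any agency system) illustrates that collision: a linear model’s coefficients can be cited directly in a written explanation of a decision, while a boosted tree ensemble making the same kind of prediction offers no comparably direct rationale for an individual outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical case features, e.g. an evidence-quality score and filing delay.
X = rng.normal(size=(500, 2))
y = (1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

interpretable = LogisticRegression().fit(X, y)
black_box = GradientBoostingClassifier().fit(X, y)

# The linear model's coefficients map directly onto a stated reason
# ("the claim was granted chiefly because of the evidence score").
print("logistic coefficients:", interpretable.coef_[0])

# The ensemble may predict just as well, but its decision is spread across
# hundreds of trees; no single weight explains an individual outcome.
case = np.array([[1.0, -0.2]])
print("ensemble P(grant):", black_box.predict_proba(case)[0, 1])
```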

Beyond these technical challenges lie more philosophical, constitutional implications. Of automated processes that streamline cases and allow decisions to be made without holding hearings, Ho notes, “There’s a kind of lost strand of due process that it’s not just about due process but there’s actually a dignitary value to holding hearings, so the tool that allows you to skip them may be the wrong one.” Individuals filing claims may care less about the outcome than about simply being heard, about engaging with the judicial system in a meaningful way. That meaningful engagement is at risk of being lost if we come to rely more and more on AI for adjudication and enforcement.

“There are lots of rich and important questions about how to arm the government with what it needs to develop effective and fair AI governance tools… and how to build a sensible accountability structure around use of those tools,” Engstrom commented. Indeed, beyond the technical and constitutional challenges of AI in government are the practical implementation challenges. As stated in the report, “Managed poorly, government deployment of AI tools can hollow out the human expertise inside agencies with few compensating gains, widen the public-private technology gap, increase undesirable opacity in public decision-making, and heighten concerns about arbitrary government action and power.” As AI is leveraged more and more within government agencies, the priorities will be to develop tools internally rather than through contractors and to understand the context-specific need for transparency, while consistently upholding constitutional and moral responsibilities to the citizens these tools affect.

Indeed, further research in the field of AI for government will require interdisciplinary collaboration and in-house expertise within government agencies. “[Government agencies’] problem is not in finding somebody who can code up a hyperparameter search using scikit learn... the problem is having someone who both knows what tools are available... and can learn enough about the institution to know what problems are worth solving.” For AI to be leveraged effectively and responsibly within the government, a bridge must be built between academia and Washington, D.C. Technological advances must reach the public sector instead of being funneled solely from universities into private industry and Silicon Valley. As an epicenter of innovation and interdisciplinary research, Stanford holds a social and moral responsibility to help shrink this massive divide.