How the Federal Government Can Rein In A.I. in Law Enforcement

One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Instead, law enforcement agencies should be required to provide verifiable evidence that the A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights.

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement’s use of emerging technologies. These harms include false arrests and police seizures, among them a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies such as facial recognition and automated license plate readers.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people’s safety or well-being at risk or violate their rights. Technologies that fall short of these proposed “minimum practices” would be prohibited after next Aug. 1.

Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The costs and benefits of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals — especially in marginalized communities — must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world and continually monitored. Agencies would have to solicit public comment, including from affected communities, before using the technologies.

The proposed requirements are serious ones. They should have been in place before law enforcement began using these emerging technologies. Given how rapidly these tools have been adopted, without evidence of equity or efficacy and with insufficient attention to preventing mistakes, we fully anticipate that some A.I. technologies will not meet the proposed standards and will have to be banned.

The overall thrust of the federal A.I. initiative is to push for rapid use of untested technologies by law enforcement, an approach that too often fails and causes harm. For that reason, the Office of Management and Budget must play a serious oversight role.

Far and away the most worrisome elements in the proposal are provisions that create the opportunity for loopholes. For example, the chief A.I. officer of each federal agency could waive proposed protections with nothing more than a justification sent to O.M.B. Worse yet, the justification need only claim “an unacceptable impediment to critical agency operations” — the sort of claim law enforcement regularly makes to avoid regulation.

This waiver provision has the potential to wipe away all that the proposal promises. No waiver should be permitted without clear proof that it is essential — proof that in our experience law enforcement typically cannot muster. No one person should have the power to issue such a waiver. There must be careful review to ensure waivers are legitimate. Unless the recommendations are enforced strictly, we will see more surveillance, more people forced into unjustified encounters with law enforcement, and more harm to communities of color. Technologies that are clearly shown to be discriminatory should not be used.

There is also a vague exception for “national security,” a phrase frequently used to excuse policing from legal protections for civil rights and against discrimination. “National security” requires a sharper definition to prevent the exemption from being invoked without valid cause or oversight.

Finally, nothing in this proposal applies beyond federal government agencies. The F.B.I., the Transportation Security Administration and other federal agencies are aggressively embracing facial recognition and other biometric technologies that can recognize individuals by their unique physical characteristics. But so are state and local agencies, which do not fall under these guidelines. The federal government regularly offers funding as a carrot to win state and local agencies’ compliance with federal rules. It should do the same here.

We hope the Office of Management and Budget will set a higher standard at the federal level for law enforcement’s use of emerging technologies, a standard that state and local governments should also follow. It would be a shame to make the progress envisioned in this proposal and have it undermined by backdoor exceptions.

Joy Buolamwini is the founder of the Algorithmic Justice League, which seeks to raise awareness about the potential harms of artificial intelligence, and the author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” Barry Friedman is a professor at New York University’s School of Law and faculty director of its Policing Project. He is the author of “Unwarranted: Policing Without Permission.”
