Senators Reverend Warnock, Durbin Raise Concerns About Facial Recognition Software, Demand Better DOJ Oversight

Senator Reverend Warnock, Senate Judiciary Committee Chair Durbin, and 16 other Senators raised concerns with the Department of Justice that funding frequently inaccurate facial recognition software could lead to violations of Title VI of the Civil Rights Act

The letter follows recent reporting about Georgia resident Randal Quran Reid, who, while driving to his mother’s house, was arrested for a crime committed in Louisiana, a state Reid has never visited

There is mounting evidence that facial recognition software is less accurate when analyzing faces of dark-skinned minorities

Senator Reverend Warnock has long championed efforts to combat mass incarceration

Senators Reverend Warnock, Durbin: “We are deeply concerned that facial recognition technology may reinforce racial bias in our criminal justice system and contribute to arrests based on faulty evidence… Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws”

ICYMI from POLITICO: Washington takes aim at facial recognition

Washington, D.C. – Today, U.S. Senator Reverend Raphael Warnock (D-GA), U.S. Senate Majority Whip Dick Durbin (D-IL), Chair of the Senate Judiciary Committee, and 16 other Senators raised concerns with the Department of Justice (DOJ) that funding facial recognition software, which can be inaccurate and unreliable, may lead to violations of Title VI of the Civil Rights Act, which prohibits “discrimination under any program or activity receiving Federal financial assistance.” The Senators also called for additional action and oversight from DOJ concerning these fallible technologies. The letter follows widespread reporting about Georgia resident Randal Quran Reid, who was arrested in November 2022, while driving to his mother’s house, for a crime committed in Louisiana, a state Reid has never visited. Reid’s case is not an anomaly: there are at least five other publicly known cases of Black people falsely arrested based on nothing more than a facial recognition software match.

“We are deeply concerned that facial recognition technology may reinforce racial bias in our criminal justice system and contribute to arrests based on faulty evidence,” wrote the Senators to the Department of Justice. “Errors in facial recognition technology can upend the lives of American citizens. Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws.”

Numerous academic and government studies establish that facial recognition technology is especially likely to misidentify not only Black faces but Native American and Asian faces as well. The National Institute of Standards and Technology (NIST) study cited in the Senators’ letter found false positive rates for Asian and Black faces that were higher by up to a factor of 100, depending on the algorithm.

In addition to Senator Warnock and Chair Durbin, the letter is signed by Senators Cory Booker (D-NJ), Laphonza Butler (D-CA), Ben Cardin (D-MD), John Fetterman (D-PA), Mark Kelly (D-AZ), Edward J. Markey (D-MA), Jeff Merkley (D-OR), Alex Padilla (D-CA), Gary Peters (D-MI), Bernard Sanders (I-VT), Brian Schatz (D-HI), Tina Smith (D-MN), Chris Van Hollen (D-MD), Elizabeth Warren (D-MA), Peter Welch (D-VT), and Ron Wyden (D-OR).

“The explosion of artificial intelligence and algorithmic systems is raising crucial questions about how new technologies will exacerbate existing racial disparities,” said Cody Venzke, Senior Policy Counsel in the ACLU’s National Political Advocacy Department. “Those questions are even more pressing in law enforcement, where AI-driven facial recognition technology has increased error rates for Black, Native American, and Asian people and has led to multiple false arrests. Facial recognition technology threatens basic civil rights, and today’s letter asks crucial questions about the DOJ’s role in supporting and perpetuating its use.”

“Facial recognition technology is a racially biased tool with unchecked potential to limit civil rights and liberties,” said Brandon Tucker, Senior Director of Policy & Government Affairs at Color Of Change. “Its use increases the surveillance of Black people by corporate actors and law enforcement agencies. Strong anti-surveillance principles and anti-discrimination principles are vital to prevent improper data collection and use. We stand behind any and all efforts to better protect and strengthen the civil rights and liberties for all, particularly in Black communities.”

“As Mr. Reid’s horrific experience shows, facial recognition technology poses a grave threat, especially to persons of color who are 100 times more at risk for misidentification,” said Chris Bruce, Esq., Policy Director for the ACLU of Georgia. “We join the call to ensure this technology will not be used to discriminate against communities of color and safeguards are placed to protect the civil rights and liberties of all.”

“The widespread and ever-growing use of facial recognition technology poses profound risks to our civil rights and freedoms. Time and again, data has shown that these technologies perpetuate and reinforce existing discrimination, especially within our criminal legal system, resulting in the wrongful surveillance and arrests of Black and Brown people across the country,” said Quinn Anex-Ries, Policy Associate at the Lawyers’ Committee for Civil Rights Under Law. “The Lawyers’ Committee applauds Senators Raphael Warnock and Dick Durbin for leading this effort to ensure that facial recognition technology and other biometric tools are analyzed and audited for compliance with and adherence to our nation’s civil rights laws. While stronger protections are certainly needed to fully address the harmful effects of this technology, this effort is a critical step forward.”

A full copy of the letter can be found HERE and below:

Dear Attorney General Garland:

We write to request information about the U.S. Department of Justice’s (DOJ) funding and oversight of facial recognition tools and other biometric technologies under the Civil Rights Act of 1964 and other applicable federal statutes and regulations. 

In recent years, facial recognition and other biometric technologies have become widely used in law enforcement. However, these technologies can be unreliable and inaccurate, especially with respect to race and ethnicity. An April 6, 2023 report in the New York Times provided a particularly vivid example of the consequences of misidentification. In 2022, Randal Quran Reid, a Georgia resident, was arrested while driving to his mother’s home outside of Atlanta the day after Thanksgiving. He was accused of retail theft in Louisiana, though he said he had never been to the state. Law enforcement officials refused to explain why he had been targeted, and Mr. Reid was jailed for six days. His family had to spend thousands of dollars in legal fees to determine he had been falsely identified, free him from jail, and clear his name. Reporting confirmed that facial recognition technology was used to initially identify Mr. Reid.

In at least five other publicly known cases, Americans have been arrested based on little or nothing more than an incorrect facial recognition match. All six victims were Black people.

We are concerned that the use of certain forms of biometric technology, such as facial recognition technology, may potentially violate Title VI of the Civil Rights Act of 1964, which prohibits “discrimination under any program or activity receiving Federal financial assistance” based on “race, color, or national origin.” The law prohibits intentional discrimination as well as discriminatory effects. Title VI thus restricts the ability of grant recipients funded by agencies like DOJ to deploy programs or technologies that may result in discrimination. 

Numerous studies, including one co-authored by a Federal Bureau of Investigation (FBI) scientist and another by the National Institute of Standards and Technology (NIST), have established that facial recognition technology is less accurate when analyzing dark-skinned faces. Notably, the NIST study found that facial recognition technology is especially likely to misidentify not only Black faces but Native American and Asian faces as well. Specifically, the NIST study found higher rates of false positives for Asian and Black faces, up to a factor of 100, depending on the algorithm. It also found that “false positive rates are highest in West and East African and East Asian people [… and] are also elevated, but slightly less so, in South Asian and Central American people. The lowest false positive rates generally occur with East European individuals.”

There also appear to be serious disparities in who is subjected to facial recognition technology searches. In New Orleans, for example, police department data shows that “nearly every use of the technology from last October to this August was on a Black person.” And research indicates that police deployment of facial recognition technology “contributes to greater racial disparity in arrests.”

Facial recognition technology programs are also widely used by federal agencies. The FBI deploys facial recognition technology through its Facial Analysis, Comparison, and Evaluation (FACE) Services Unit, which has access to over 641 million photographs, including driver’s license photographs from more than twenty states. The FBI also processes thousands of facial recognition scans each month requested by tribal, state, and local law enforcement through the Next Generation Identification-Interstate Photo System (NGI-IPS). A 2020 Government Accountability Office (GAO) report on facial recognition technology (FRT) found that DOJ used 11 facial recognition systems, as well as an unspecified number of state and local systems, and regularly contracted with non-federal entities for facial recognition services. The report stated, “DOJ also reported plans to expand its use of FRT through fiscal year 2023.”

In September 2023, GAO released a new report on the use of FRT at seven component agencies within DOJ and the Department of Homeland Security (DHS). GAO found that most law enforcement officers at these agencies were not required to take any training before they were authorized to use FRT. At the FBI, for example, only ten of the 196 staff members who used FRT at the agency had completed facial recognition training. GAO also found that four agencies (FBI; Customs and Border Protection; the Bureau of Alcohol, Tobacco, Firearms and Explosives; and the Drug Enforcement Administration) did not have guidance or policies specific to FRT that addressed civil rights and civil liberties. GAO recommended that DOJ develop a plan to issue a facial recognition technology policy addressing safeguards for civil rights and civil liberties.

We are deeply concerned that facial recognition technology may reinforce racial bias in our criminal justice system and contribute to arrests based on faulty evidence. Errors in facial recognition technology can upend the lives of American citizens. Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws. In light of these concerns, we ask the Department of Justice to address the following questions by no later than February 29, 2024:

  1. Has DOJ analyzed the extent to which federal grant recipients who use facial recognition technology and other forms of biometric technology are complying with or violating the Civil Rights Act of 1964 or other federal civil rights laws? 
  2. What practices and policies does DOJ have in place to ensure that its programs audit new biometric technologies, engage in proper oversight of their deployment, and do not violate any relevant constitutional or statutory federal civil rights protections?
  3. Does DOJ engage in interagency coordination with regard to Title VI compliance for programs receiving funding for facial recognition tools and other biometric technologies? If so, in what forms?
  4. Has DOJ analyzed whether facial recognition technology or other biometric technologies that are operated or used by any federal, tribal, state, or local government agency result in a disparate impact on or disparate treatment of any group of Americans on the basis of race, color, or national origin? If so, what was the result of the analysis? Has DOJ audited uses of facial recognition tools for instances of misidentifications and wrongful arrests, including for disparities affecting specific demographic groups?
  5. What, if any, DOJ or FBI training is provided on the use of facial recognition technology or other biometric technologies to grant recipients to ensure compliance with Title VI or other federal laws, as applicable? What, if any, DOJ training is provided to state and local law enforcement agencies that receive facial recognition results or results from other biometric technologies from federal law enforcement agencies?
  6. What does DOJ believe is the scope of DOJ’s legal authority to issue Title VI regulations pertaining to the funding of facial recognition tools and other biometric technologies?
  7. Does DOJ have policies in place regarding the collection, use, storage, and/or disposal of personal information acquired without consent in training and operating facial recognition technology or other biometric technologies? Does it require such policies from recipients of DOJ funds?
  8. What, if any, DOJ policies or trainings exist with respect to applicable Fourth Amendment protections, including any limitations on the use of facial recognition technology or other biometric technologies as the sole basis for identifying, surveilling, detaining, or arresting individuals?

Thank you for your shared commitment to upholding the rights of every American. We look forward to your prompt response.

Sincerely, 

###
