
Members of the public and local officials criticized the Long Beach Police Department’s continued use of facial recognition software at a Friday public safety committee hearing.
A subcommittee of the city's Technology and Innovation Commission even recommended that the technology be banned over concerns about its accuracy.
“Imagine the chaos when the LBPD starts getting alerts saying the suspect was spotted by a camera downtown, but 9 out of 10 times, they’re being sent out on a false alert,” said one speaker at the Friday meeting.
“It could easily get someone killed one day.”
Local attorney Greg Buhl wrote in a letter to the committee, "Before the City gives the green light to LBPD's continued and unchecked use of facial recognition, the inherent equity and racial justice issues associated [with it] must be addressed."
But Long Beach's assistant chief of police, Wally Hebeish, told the panel that the facial recognition software is currently used only in limited ways: to generate possible leads in criminal investigations.
He also said supervisors should be trusted to serve as safeguards.
“Our policy does mandate that the supervisor overseeing the use of that system does ensure that it is being used appropriately and takes steps if it is not,” Hebeish said.
A subcommittee within the city’s Technology and Innovation Commission expressed concern that the city had no “independent auditing entities.”
In a report released ahead of the Friday meeting, the subcommittee recommended the city ban the use of this type of software by the police with “possible consideration of narrowly defined and limited exception(s).”
The report stated that current facial recognition technologies "are not only insufficiently accurate but pose substantive and unequal risk to Black residents and residents of color due to inherent algorithmic biases that have not been effectively addressed in software design."
The findings are not new to critics of facial recognition software. Research from 2018 found that some algorithms misclassified Black women nearly 35 percent of the time but almost always correctly identified white men.
The accuracy of facial recognition software has improved since then.
The National Institute of Standards and Technology tested 127 facial recognition algorithms from 39 different developers and found that the software became 20 times better at matching faces between 2014 and 2018 alone: the best-performing algorithms missed a true match only 0.2 percent of the time, down from 4 percent in 2014.
The Department of Homeland Security, however, found that commercial biometric systems are less accurate in identifying people with darker skin color.
Tech publication Wired quoted a July 2019 report prepared for the National Institute of Standards and Technology that likewise found a racial disparity in accuracy. "White males ... is the demographic that usually gives the lowest FMR," or false match rate, the report stated.
"Black females ... is the demographic that usually gives the highest FMR."
The Long Beach Post reported that the three city council members, Suely Saro, Roberto Uranga and Suzie Price, did not publicly take a stance at the meeting.
The Technology and Innovation Commission is scheduled to consider the report on Wednesday at 3:30 p.m.