Identity verification company ID.me uses a type of powerful facial recognition that searches for individuals within mass databases of photos, CEO Blake Hall explained in a LinkedIn post on Wednesday.
The post follows a news release from the company last week stating directly: “Our 1:1 face match is comparable to taking a selfie to unlock a smartphone. ID.me does not use 1:many facial recognition, which is more complex and problematic.” Hall’s Wednesday post confirms that ID.me does indeed use 1:many technology.
Privacy advocates say that both versions of facial recognition pose a threat to consumers. In addition to numerous studies demonstrating that the technology is less accurate on non-White skin tones, amassing biometric data poses a serious security risk.
“Governments and companies are amassing these databases of your personal biometric information, which, unlike databases of credit cards, cannot be replaced,” explained Caitlin Seeley-George, campaign director at nonprofit Fight for the Future. “And these are databases that are highly targeted by hackers and information that can absolutely be used in ways that are harmful to people.”
In the Wednesday LinkedIn post, Hall said that 1:many verification is used “once during enrollment” and “is not tied to identity verification.”

“It does not block legitimate users from verifying their identity, nor is it used for any purpose other than to prevent identity theft,” he writes.

“We avoid disclosing methods we use to stop identity theft and organized crime as it jeopardizes their effectiveness,” Hall writes. He pointed to the indictment last week of a New Jersey man who bypassed ID.me’s verification system and stole nearly a million dollars in unemployment benefits.
The LinkedIn post follows internal discussions expressing concerns that the company’s public statements had been inaccurate.
“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”
The internal messages, obtained by CyberScoop, also indicate that the company discussed the use of 1:many with the IRS in a meeting.
“I was in a conversation with the IRS on 1/19 where we explicitly discussed using AWS Rekognition for 1:many face search,” the engineer wrote. “This seems like it could be troublesome so I wanted to post this here to discuss next steps.”
While ID.me lists Paravision and iProov as biometric technology partners, it does not appear to disclose its relationship with Amazon’s facial recognition product. Amazon isn’t mentioned in a recently released white paper explaining ID.me’s technology.
Seeley-George says the fact that ID.me misled the public about the type of technology it is using raises serious concerns about government agencies using facial recognition technology.
“This is just another example where ID.me is falsely portraying its tool and how it will be used on millions of people,” said Seeley-George. “This is not the type of product that we should be asking millions of people to be using if they have to lie about what it’s doing.”
Jay Stanley, senior policy analyst with the American Civil Liberties Union’s Speech, Privacy and Technology Project, said that the use of 1:many raises even more questions about the technology’s accuracy, what determines whether an individual is placed on a blocklist, and what due process is in place for blocked individuals.
“They say that it’s not tied to identity verification, that it does not block human users from identifying their identity, but if you are somebody on that list, how can it not cause you problems?” he said.
In reference to its 1:1 matching, ID.me stated in a press release that “there was no statistically significant difference in the propensity to pass or fail the face match step across demographic groups, including groups with different skin tones, as corroborated by NIST, ID.me, and a state government agency.” Neither the company’s 1:1 nor its 1:many facial recognition technology has been made available for public auditing.
“It’s all very opaque,” Stanley said. “And that’s a recipe for injustice.”
Multiple studies have shown that, even in ideal lab conditions, facial recognition technology disproportionately results in false positives when used on people of color. Some lawmakers have called for a ban on the use of facial recognition technology by law enforcement because of substantial inaccuracy issues leading to false arrests.
The new information adds to growing scrutiny over the IRS’s recently announced decision to use the technology to verify credentials for its online web portal. The move has been questioned by privacy advocates as well as lawmakers, including Sen. Ron Wyden (D-Ore.).
Concerns with the company aren’t new. Users navigating the company’s technology to receive unemployment benefits during the pandemic reported hours-long waits for verification, incorrectly rejected matches, and months-long delays in rectifying denials.
Cybersecurity reporter Brian Krebs reported a similar experience using the company’s technology to create IRS credentials through the system.
In addition to the IRS, ID.me has contracts with the Department of Veterans Affairs and the Social Security Administration.
The IRS and ID.me did not respond to questions sent by CyberScoop.
Updated 1/26/22: To include additional information about internal company discussions and comment from the ACLU.