A secretive New York artificial intelligence startup named Clearview AI vaulted into the public consciousness almost overnight five weeks ago, when The New York Times detailed how it built a searchable photo database using billions of images scraped from social media sites.
The report drew concerns from Facebook, Google and Twitter, which said the practice violated their terms of service, but privacy advocates seemed particularly unnerved by how Clearview was marketing its compilation: as a tool for hundreds of law enforcement agencies across the world, including the FBI and the Department of Homeland Security. Although many experts argue facial recognition technology is nowhere near ready for prime time, police and public safety entities believe Clearview’s systems could help identify suspects or crime victims.
The full roster of Clearview clients remains unknown — except, for now, to at least one person.
The Daily Beast reports that Clearview recently notified customers about a breach of its system in which an intruder stole a complete list of its clients.
The notice also said the intruder accessed the number of accounts set up by customers, as well as the number of searches they conducted. The breach did not expose individual agencies’ search histories, and Clearview’s servers were not compromised.
The company’s attorney said the flaw was patched and called data breaches “part of life in the 21st century.”
But observers said the hack could give potential new customers second thoughts about how safe their information would be with Clearview.
It also underscores the privacy threats posed by data collection and AI as both technologies continue to grow rapidly.