Business News Headline Stories

Clearview AI, a startup that sells a facial recognition system to law enforcement and businesses, has been called “alarming” and “dystopian.” Its technology lets clients match faces to billions of images in a database, eroding most anonymity people have in public.

Over the past several weeks, the company has been the subject of a number of unflattering headlines. Each one raises serious privacy concerns about facial recognition technology and its growing adoption.

On Wednesday, Clearview AI confirmed to Fortune that hackers had stolen its entire client list, which BuzzFeed later reported included 2,200 law enforcement agencies, U.S. Immigration and Customs Enforcement, and the U.S. Department of Justice. The company’s clients outside of law enforcement have included Walmart and the NBA.

Then on Friday, Apple removed Clearview AI’s app from its app store after determining that Clearview had violated its policies. Clearview had listed the app under Apple’s Enterprise program, which is intended only for apps distributed internally to a single company’s workforce.

Here’s what you need to know about Clearview AI:

How does Clearview AI work?

Clearview AI has a database of 3 billion public images scraped from social networks such as Facebook, Instagram, Twitter, and YouTube, which customers can compare their own images against. If police want to identify suspects in a fight, for example, they can feed images of those suspects’ faces into Clearview and get back any matching public photos, along with links to the social network profiles or websites where those photos originated. Police can then build a profile of a suspect based on what is posted about them online.

Why it matters

Freddy Martinez, a policy analyst at Open the Government, a non-partisan coalition that advocates for government transparency, first learned about Clearview AI last year while working with nonprofit news site MuckRock on a project about law enforcement’s use of facial recognition technology.

In the past, police have mostly relied on mugshots to identify suspects, he says. More recently, they’ve moved into the digital age by tapping databases and artificial intelligence that does most of the work. “For the last few years, people have been warning as the technology gets cheaper and faster, how you can expand the data sets for relatively no cost,” Martinez tells Fortune. “What Clearview did was something quite unheard of.”

The company says on its website that its tools are used for “search, not surveillance” and that they help keep communities safe by identifying predators.

Still, experts and politicians have said that the technology raises privacy concerns. These include making people’s data available without their consent, as well as the company’s failure to secure its systems, underscored by its admission last week that it had been hacked.

“This is a company whose entire business model relies on collecting incredibly sensitive and personal information, and this breach is yet another sign that the potential benefits of Clearview’s technology do not outweigh the grave privacy risks it poses,” Sen. Ed Markey (D-Mass.) says in a statement.

Tor Ekeland, an attorney for Clearview, told Fortune last week that “security is a top priority.” He added: “Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

Not everyone in government or tech is a fan

Facebook, Twitter, and Google have already sent Clearview AI letters asking it to stop scraping the photos on their sites. Some politicians are also sounding the alarm. On January 24, New Jersey’s attorney general ordered the state’s law enforcement to stop using Clearview AI over concern about the company’s techniques.

Sen. Markey sent Clearview CEO Hoan Ton-That a letter on January 23, along with a list of questions he hoped to have the CEO answer.

“Clearview’s product appears to pose particularly chilling privacy risks,” Markey writes. “And I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

Martinez, from Open the Government, adds that it’s vital that the federal and local governments continue to ask questions about this technology. “With a little oversight and transparency, people are finding this tech isn’t appropriate,” he says.

The right to be forgotten

Under California’s new privacy law, the California Consumer Privacy Act, residents of the state can request to see the data that Clearview has about them, opt out of its database, and ask the company to delete any data it holds on them.

Residents of the European Union, along with Swiss and British residents, can also request their data from Clearview, opt out for the future, and request their data be deleted.

As for everyone else? It appears what’s out there will remain, at least for now.

Nico Fischbach, chief technology officer at security company Forcepoint, tells Fortune that Clearview’s business is an example of how “it takes just one person to make data public.”

“If you put something on public social media, whatever it is, you create a reference point to yourself,” he says. He adds that tagging friends also creates an even better picture of who is in a person’s social circle.

“Everybody lives a bit in their own social media bubble,” Fischbach says. “This has helped people realize there is no fence around the data they share on public social media.”

More must-read stories from Fortune:

—How 5G promises to revolutionize farming
—Did the ‘techlash’ kill Alphabet’s city of the future?
—College backlash against facial recognition technology grows
—In A.I., what would Jesus do?
—Coronavirus is giving China cover to expand its surveillance. What happens next?

Catch up with Data Sheet, Fortune’s daily digest on the business of tech.
