Clearview’s view of everyday dystopia


Please smile, your photo could be passed around. Image: Endstation Jetzt, CC BY 2.0

Clearview AI’s image database, the end of anonymity, and the surveillance machinery of the 21st century. How a start-up abuses our photos

In early 2020, the first sensational revelation came: the U.S. company Clearview had spent years collecting billions of photos from the Internet for a biometric database and then sold access to it. The data was exploited for identification purposes, at first officially "only" by hundreds of law enforcement agencies in 27 countries, including 2,400 police departments. Later it turned out: also by private customers.

All images come from websites and social networks. They were generally taken without the knowledge or consent of the owners and the persons depicted. They are analyzed automatically and can be matched against fresh images from investigators; behavioral and location profiles of suspects come free of charge.

Biometrics’ birth defect

Biometrics, security experts say, has one major drawback: no matter which feature you choose, it exists only in a small, finite number. Once a fingerprint has been compromised and archived, only nine unspent fingerprints remain; each person has only two eyes for iris scanning; and the face, handwriting, and typing behavior each exist in exactly one copy.

The ancient standard password, on the other hand, offers practically unlimited variations, bounded only by the user’s memory. Even a password written down and stored securely is sometimes safer than a fingerprint, which you leave behind everywhere: on glasses, on doors, in buses and trains.
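The asymmetry can be made concrete with a little arithmetic. A sketch (the alphabet size and password length are illustrative assumptions, not figures from the text):

```python
def password_combinations(alphabet_size: int, length: int) -> int:
    """Size of the search space for a password of the given length."""
    return alphabet_size ** length

# A 12-character password over ~95 printable ASCII characters:
print(f"{password_combinations(95, 12):.2e}")  # on the order of 10**23

# Biometric features, by contrast, are a fixed, tiny inventory per person:
FINGERPRINTS, IRISES, FACES = 10, 2, 1
print(FINGERPRINTS - 1)  # spare fingerprints left after a single leak: 9
```

A leaked password can be replaced from an effectively inexhaustible supply; a leaked fingerprint reduces a lifetime stock of ten by one, permanently.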

But that a photo uploaded from a party or a vacation would automatically end up serving investigators for biometric identification was something nobody could, or wanted to, imagine until last year.

Biometrics is, at bottom, nothing more than a very elaborate password-like procedure. For facial recognition in particular, companies combine many factors into a profile in the truest sense of the word. Unlike passwords (or user names), however, some biometric features are also perfectly suited to monitoring people. The more visible the feature, the better this works, ideally with faces, because they are highly individual and easy to recognize even from a distance.
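What matching a fresh image against an archived profile boils down to can be sketched roughly as follows (the vectors, dimensions, and threshold are hypothetical; real systems like Clearview’s use proprietary models with embeddings of 128 or more dimensions):

```python
import math

def euclidean(a: list, b: list) -> float:
    """Distance between two face-embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(probe: list, reference: list, threshold: float = 0.6) -> bool:
    """Toy decision rule: vectors closer than the threshold count as a match."""
    return euclidean(probe, reference) < threshold

# Hypothetical 4-dimensional embeddings:
archived = [0.12, -0.40, 0.33, 0.05]  # derived from a scraped photo
fresh = [0.10, -0.38, 0.35, 0.04]     # derived from an investigator's image
print(same_person(fresh, archived))   # prints True: the vectors are close
```

The point of the sketch: once a face has been reduced to such a vector, comparing it against billions of archived vectors is cheap and fully automatic.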

Until now, however, hardly anyone considered the risk of surveillance when uploading pictures. That seemed more plausible in countries where uncompromising governments deploy artificial intelligence and cameras, such as China. Almost every citizen of the People’s Republic is monitored by cameras almost constantly, automatically identified, and sanctioned if they break the law.

Chinese conditions

All this seems unthinkable in Western democracies, least of all in Europe, where privacy and data protection are fundamental values and the GDPR has successfully put Google, Facebook, and Microsoft in their place. The right to one’s own image should therefore rule out databases of profiles for automated facial recognition, even if we let ourselves be tempted into posting masses of selfies.

A first complaint, filed in 2020 by Matthias Marx with the Hamburg data protection authority, only produced results after eleven months and after statements by noyb. Noyb ("None of Your Business") is the non-governmental organization of the Austrian Max Schrems, who became known for his lawsuits against Facebook and the Privacy Shield ruling.

In February, Hamburg’s Commissioner for Data Protection and Freedom of Information ruled that Marx has the right to have the hash value generated from his face deleted. Noyb calls this a "minimal verdict" and expresses disappointment that the data protection authority did not also ban the collection of the data, let alone require the deletion of all photos of Marx.

Marx himself on this:

The existence of such a surveillance machine is frightening. Almost a year after my original complaint, Clearview AI does not even have to delete the photos that show me. Worse, every person affected must now file a complaint themselves. This shows that our data is not yet sufficiently protected and that action against biometric surveillance is needed.
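The "hash value" in the ruling is the mathematical fingerprint derived from Marx’s face. How such a value might be produced can be sketched loosely (purely illustrative; Clearview’s actual procedure is proprietary and unknown):

```python
import hashlib
import struct

def face_hash(embedding: list) -> str:
    """Illustrative only: turn a face embedding into a stable identifier
    by hashing its serialized bytes. Not Clearview's real method."""
    data = b"".join(struct.pack("!d", value) for value in embedding)
    return hashlib.sha256(data).hexdigest()

# The same face data always yields the same identifier:
a = face_hash([0.12, -0.40, 0.33, 0.05])
b = face_hash([0.12, -0.40, 0.33, 0.05])
print(a == b)  # prints True
```

The sketch also illustrates the criticism of the verdict: as long as the source photos remain in the database, a deleted hash can simply be recomputed from them.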

Despite the GDPR, little chance of success for consumers

Whoever wants to prohibit Clearview from collecting images of themselves under the GDPR must file a complaint and hope. The accusation against the company weighs heavily: besides the images themselves, it also exploits all of their metadata, and it does not hesitate to conceal this. The details came to light only bit by bit after the New York Times disclosed the business practices in January 2020. A few days ago, noyb, together with other organizations, filed complaints with the data protection authorities in France, Austria, Italy, Greece, and Great Britain.

In Italy, privacy advocates have already obtained a ban on police forces using data from the Clearview database for real-time facial recognition.

Alan Dahi, privacy lawyer at noyb, explained:

"Just because something is online does not automatically make it fair game for others to grab in any way they want – that is neither moral nor legal. Data protection authorities must take action to prevent Clearview and similar organizations from accessing EU citizens’ personal data." The next three months will show whether these rights can be enforced – that is how long the regulators now have to give an initial response.

Proceedings in the U.S. as well

Clearview AI also has a number of lawsuits on its hands in the U.S., for example in California, where the activist groups Mijente and NorcalResist are suing. Police authorities in the West Coast state are rarely squeamish when it comes to surveillance. In 2015, after a long dispute, details of the "Stingray" project became known, a surveillance apparatus that comes close to the science fiction film "Blue Thunder".

Microphones monitor bus stops or alert the police when a shot is fired. It is part of everyday life in California that someone who is forbidden to be in a children’s playground at night is informed of his rights by helicopter and searchlight. Thanks to Clearview’s data, investigators can then also address him by name if, for example, the IMSI catcher installed on board as standard equipment fails. Even the New York Police Department had to admit in 2021, after years of silence and denial, to using Clearview’s data.

Unlike in the EU, the California lawsuit argues that "Clearview AI’s mass surveillance technology violates the privacy rights of people in California in general, and particularly harms immigrants and communities of color", as reported, among others, by one news site.

Clearview responds by invoking the First Amendment to the U.S. Constitution, the catch-all covering freedom of speech, religion, the press, and assembly. This argument would not be allowed to succeed in the EU, regardless of whether the terms of use of Facebook, Twitter, or other services were violated. But perhaps the case will prove another touchstone for the GDPR and for the EU’s ability to enforce European values and laws in other cultural spheres as well.

Backing from the alt-right scene

The background of the Clearview AI company also shows why this would be important. First there is company founder Hoan Ton-That, an Australian-Vietnamese entrepreneur who got his start in California’s grassroots IT scene with phishing and social media apps. Later he joined the network Angel.list. On his way to a new career as a model, he met co-founder and politician Richard Schwarz, who had worked with New York Mayor Rudy Giuliani during the latter’s tenure and was among the founders of an (unsuccessful) porn filter.

Among the early investors in Clearview AI, founded in 2017, are many familiar faces of the U.S. right, such as Peter Thiel, co-founder of Paypal and of the surveillance software maker Palantir and an early Facebook investor. Although Clearview provides no details, analysts suspect that the bulk of the $8.5 million Clearview received at the end of 2020, that is, after the disclosures described above, came from Thiel.

Even more lawsuits in the U.S.

After the publication by the New York Times, Clearview AI faced a wave of cease-and-desist demands. Twitter, Youtube, Facebook, and Google all sent requests to stop the unauthorized scraping of images immediately.

A request from Alabama police to use the data to investigate the storming of the Capitol in early January also caused a stir; the FBI refused to provide any information. But Clearview AI has no shortage of whimsical ideas: as early as 2020, Ton-That explained on television how facial recognition could be used for corona contact tracing.

After a massive data leak in early 2020, a lawyer for the company told the portal Daily Beast that data mishaps are "part of life in the 21st century". The declaration that the company will no longer sell to private customers seems just as unreliable, as there is no real way out of the database for individuals.

Markus Feilner has been working with Linux since 1994, was deputy editor-in-chief of Linux-Magazin and iX and team leader for documentation at the Linux vendor SUSE, and with his company Feilner IT specializes in documentation and OSI layers 8, 9, and 10.
