Police-trialled Clearview AI breached Australians' privacy, investigation finds
Image search tool trialled by police agencies.
Clearview AI, whose facial recognition database was trialled by Australian police agencies last year, breached Australian privacy rules with its operating model.
The company scrapes people’s photos from the web and then allows users to upload an image and search for a match against the database, which, in the case of policing, may help to identify a person of interest.
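For readers unfamiliar with how such a reverse face search works mechanically, the sketch below illustrates the general technique: embed each scraped photo as a fixed-length vector, then rank the stored vectors by similarity to an uploaded query. This is a minimal illustration only, not Clearview AI's actual code; `embed_face` is a hypothetical placeholder for a trained face-embedding model.

```python
# Minimal sketch of a reverse face search of the kind described above.
# NOT Clearview AI's code: embed_face() is a hypothetical placeholder that
# a real system would replace with a face-embedding neural network.
import hashlib
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    # Placeholder: derive a deterministic pseudo-embedding from the raw
    # bytes so the sketch runs end to end without a trained model.
    seed = int.from_bytes(hashlib.sha256(image_bytes).digest()[:8], "big")
    vec = np.random.default_rng(seed).standard_normal(512)
    return vec / np.linalg.norm(vec)  # unit length, so dot product = cosine

def search(query_image: bytes, gallery: dict[str, np.ndarray], top_k: int = 5):
    # Rank every stored embedding by cosine similarity to the query face.
    probe = embed_face(query_image)
    scores = {url: float(vec @ probe) for url, vec in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# The "database" maps source URLs to embeddings precomputed at scrape time.
gallery = {f"https://example.com/photo_{i}.jpg": embed_face(bytes([i]))
           for i in range(10)}
print(search(bytes([3]), gallery, top_k=3))
```

The key design point is that the matching happens between vectors, not images: once a photo has been scraped and embedded, the original can be deleted and the person remains searchable.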
The Australian Federal Police (AFP), Victoria Police, Queensland Police Service and South Australia Police all took up free trials in early 2020.
They admitted to being triallists in the months following a news report on widespread use of the Clearview AI tool by law enforcement agencies (LEAs) and other organisations.
The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) opened a joint investigation into Clearview AI in July last year.
The OAIC found Clearview AI in breach of Australian privacy rules and ordered the company to “cease collecting facial images and biometric templates from individuals in Australia and destroy all facial images and biometric templates collected from individuals in Australia.”
It must also stop scraping the image data of Australian citizens from the web without their consent.
Clearview AI told the OAIC it had terminated “all of its trial users in Australia” by the end of March 2020 “and had instituted a policy of refusing all requests for accounts from Australia.”
“There is no evidence of new Australian trial users or account holders since March 2020,” the OAIC said. [pdf]
Clearview AI defended itself by arguing “that the information it handled was not personal information and that, as a company based in the US, it was not within the Privacy Act’s jurisdiction,” the OAIC said.
The company argued that any image “published without requiring a password or other security on the open web”, and downloadable from within the US, was fair game.
“The act of downloading an image in the United States of America cannot be considered as carrying on business in Australia,” it said.
However, the OAIC ruled the free trials for Australian law enforcement, and the “indiscriminate scraping” of Australians’ photos “clearly demonstrate that the respondent carries on business in Australia”.
“The fact that none of the Australian police agencies became paying customers is immaterial,” the OAIC said.
“The respondent’s activities were commercial in nature, and the evidence shows that the trials existed for the express purpose of enticing the purchase of accounts.”
Clearview AI also argued that its annual turnover is “less than $3 million”, but offered no evidence for this.
The OAIC added that it is “currently finalising an investigation into the Australian Federal Police’s trial use of the technology and whether it complied with requirements under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks.”
By Ry Crozier
CONTINUED:
Clearview AI slammed for breaching Australians' privacy on numerous fronts
Despite uncovering Clearview AI's intrusive practices, Australia's Information Commissioner conceded that the number of Australians who had their biometric information scraped by the company remains unknown.
Australia's Information Commissioner has found that Clearview AI breached Australia's privacy laws on numerous fronts, after a bilateral investigation uncovered that the company's facial recognition tool collected Australians' sensitive information without consent and by unfair means.
The investigation, conducted by the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner's Office (ICO), found that Clearview AI's facial recognition tool scraped biometric information from the web indiscriminately and has collected data on at least 3 billion people.
The OAIC also found that some users at Australian police agencies, themselves Australian residents who trialled the tool, searched for and identified images of themselves, as well as images of unknown Australian persons of interest, in Clearview AI's database.
Weighing these factors together, Australia's Information Commissioner Angelene Falk concluded that Clearview AI breached Australia's privacy laws by collecting Australians' sensitive information without consent and by unfair means. In her determination [PDF], Falk explained that consent could not be inferred simply because affected Australians' facial images were already available online, since uploading a photo gives no unambiguous indication of agreement to its collection for Clearview AI's purposes.
"I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes," the Information Commissioner wrote.
"Consent also cannot be implied if individuals are not adequately informed about the implications of providing or withholding consent. This includes ensuring that an individual is properly and clearly informed about how their personal information will be handled, so they can decide whether to give consent."
Other breaches of Australia's privacy laws found by Falk were that Clearview AI failed to take reasonable steps to either notify individuals of the collection of personal information or ensure that personal information it disclosed was accurate.
She also slammed the company for not taking reasonable steps to implement practices, procedures, and systems to ensure compliance with the Australian Privacy Principles.
These breaches arose in part from Clearview AI removing access to an online form that let Australians opt out of being searchable on the company's facial recognition platform.
The form itself also raised privacy issues: it required Australians to submit a valid email address and an image of themselves, which was then converted into an image vector, a step Falk said allowed Clearview AI to collect additional information about Australians, as the sketch further below illustrates.
The form was created at the start of 2020, but now Australians can only make opt-out requests via email, Falk said.
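To make concrete why Falk saw the form itself as a privacy problem: matching an opt-out request against the database requires converting the submitted photo into yet another image vector. The hedged sketch below reuses the hypothetical `embed_face` from the earlier example; the similarity `threshold` is an assumed value for illustration, not a documented Clearview parameter.

```python
# Hedged sketch of the opt-out mechanics described above, reusing the
# hypothetical embed_face() from the earlier sketch. The threshold value
# is an assumption for illustration, not a documented Clearview parameter.
def opt_out(submitted_photo: bytes, gallery: dict, threshold: float = 0.9):
    # To find the person's entries, the operator must first compute a NEW
    # biometric template from the photo the individual was made to submit.
    probe = embed_face(submitted_photo)
    matches = [url for url, vec in gallery.items()
               if float(vec @ probe) >= threshold]
    for url in matches:
        del gallery[url]          # remove the matched entries...
    return probe, matches         # ...yet the probe vector now exists too
```

In other words, the opt-out flow collects exactly the kind of biometric template the person is trying to have destroyed, which is the additional collection Falk objected to.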
After making these findings, Falk has ordered Clearview AI to destroy existing biometric information it has collected from Australia. She has also ordered the company to cease collecting facial images and biometric templates from individuals in Australia.
"The covert collection of this kind of sensitive information is unreasonably intrusive and unfair," Falk said.
"It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI's database."
Despite the investigation being finalised, the exact number of affected Australians is unknown. Falk expressed concern that the number was likely to be very large given that it may include any Australian individual whose facial images are publicly accessible on the internet.
Providing an update on another Clearview AI-related investigation, Falk said she was currently in the process of finalising a separate investigation into the Australian Federal Police (AFP) trialling Clearview AI's facial recognition tool.
In April last year, the AFP admitted to trialling the Clearview AI platform from October 2019 to March 2020. State police from Victoria and Queensland also trialled the tool, with all three law enforcement agencies admitting to successfully conducting searches using facial images of individuals located in Australia with the tool.
Falk said she would soon provide a determination on whether the AFP breached the Australian Government Agencies Privacy Code's requirements to assess and mitigate privacy risks.
CONTINUED:
Victoria Police emails reveal Clearview AI's dodgy direct marketing
Why bother with messy official approvals, tedious legal and privacy assessments, or even ethics when cops use facial recognition? 'Feel free to run wild with your searches,' says Clearview
Controversial facial recognition service Clearview AI has been marketing directly to individual Australian police officers, encouraging them to make 100+ searches during free trials, and to refer the service to their colleagues.
Emails to Victoria Police [PDF], obtained under freedom of information laws by analyst Justin Warren last week, show "Team Clearview" signing up at least six of the state's officers between November 2019 and March 2020.
Clearview's promotional language seems more suited to a consumer social media app than a law enforcement tool.
"Clearview is like Google Search for faces. Just upload a photo to the app and instantly get results from mug shots, social media, and other publicly available sources," said one email to a police intelligence analyst.
"Search a lot. Your Clearview account has unlimited searches," encouraged a follow-up email.

[Image: a typical email from Clearview AI to a new trial user at Victoria Police. Source: Victoria Police]

"Don't stop at one search. See if you can reach 100 searches. It's a numbers game. Our database is always expanding and you never know when a photo will turn up a lead," it said.
"Take a selfie with Clearview or search a celebrity to see how powerful the technology can be."
Another email was even more enthusiastic. "Feel free to run wild with your searches. Test Clearview to the limit and see what it can do," it said.
Victoria Police is distancing itself from Clearview, telling The Guardian that only a small number of email addresses were registered, that the tool was not used in any investigations, and that its use has been discontinued.
"Victoria Police uploaded a small number of publicly available stock images to Clearview AI to test the technology. No images linked to any investigation by Victoria Police were uploaded as part of this testing process," a police spokeswoman said.
CLEARVIEW AI IS A CONTROVERSY MAGNET WITH FAR-RIGHT LINKS
Clearview, founded by Australian hacker and entrepreneur Hoan Ton-That, is no stranger to controversy. When its customer database was leaked in February this year, its response was cavalier.
"Security is Clearview's top priority," the company said through its lawyer.
"Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security."
This data breach led to the revelation that the Australian Federal Police had also used Clearview.
Seven officers from the Australian Centre to Counter Child Exploitation (ACCCE) had conducted searches, yet no one outside the ACCCE Operational Command knew this trial had commenced.
Police services in Queensland, Victoria, and South Australia were also on the list.
Clearview's strategy seems clear, at least in your correspondent's view: Get individual cops hooked, so investigations become dependent on the service before higher-ups have had the chance to consider the legal, privacy, and ethical issues.
No wonder police forces have been cagey about whether they're using the technology.
An even greater cause for concern, however, is Clearview's links to far-right political forces in the US, links the company seems eager to hide.
A detailed investigation by the Huffington Post published in April joined the dots connecting Ton-That with self-proclaimed neo-Nazi hacker Andrew 'weev' Auernheimer, pro-Trump propagandist Mike Cernovich of Pizzagate conspiracy theory fame, and many others.
"In this far-right clique, two of Ton-That's associates loomed larger than most thanks to their close connection to billionaire Peter Thiel, a Facebook board member and Trump adviser: Jeff Giesea, a Thiel protégé and secret funder of alt-right causes, and Charles 'Chuck' Johnson, a former Breitbart writer and far-right extremist who reportedly coordinated lawfare against media organizations with Thiel," HuffPost wrote.
Johnson reportedly introduced Ton-That to someone as "a gifted coder he'd hired to build the facial recognition tool".
"Around the same time, Johnson stated on Facebook that he was 'building algorithms to ID all the illegal immigrants for the deportation squads'," HuffPost wrote.
If the 3600-word report can be summarised at all, it's this: Clearview is linked, somehow, both to far-right racist politics and increasingly to law enforcement agencies around the world, and it wants to hide those links.
Clearview also seems to show little concern for playing by the rules.
WHERE DO THESE PHOTOGRAPHS COME FROM EXACTLY?
As detailed in The New York Times in February, Clearview had a database of 3 billion photos, collected from websites such as YouTube, Facebook, Venmo, and LinkedIn.
As reported by sister site CNET, tech giants like Google, Facebook, and Microsoft have sent Clearview AI cease-and-desist letters for scraping images hosted on their platforms.
Australian privacy commissioner Angelene Falk wants to know whether data on Australians has been collected.
In the wake of the US protests against racist law enforcement practices, major players including IBM, Amazon, and Microsoft have halted the sale of facial recognition tech to American police forces. The technology is now only a small part of their overall offerings.
But for Clearview, cops and photos are the main game.
Clearview AI is clearly a company to watch, but not in a good way.
By Stilgherrian
CONTINUED:
Commissioner Rules Clearview AI Breached Australians’ Privacy
Remember that facial recognition startup found being used by law enforcement agencies around the world last year? Well, today Australia's Privacy Commissioner ruled that the company breached the country's privacy laws. Specifically, Clearview AI was found to have breached Australians' privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool.
The ruling by the Office of the Australian Information Commissioner (OAIC) was a long time coming: the inquiry kicked off back in July last year, alongside the OAIC's UK counterpart. On Wednesday, the commissioner declared Clearview AI breached the Australian Privacy Act on multiple fronts, by:
- collecting Australians’ sensitive information without consent
- collecting personal information by unfair means
- not taking reasonable steps to notify individuals of the collection of personal information
- not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure
- not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
The controversial tech startup shocked the world when it was revealed it had scraped images on the internet for faces, entered them into its facial recognition database and provided them to law enforcement officials worldwide to search.
Clearview AI’s facial recognition tool includes a database of more than three billion images taken from social media platforms and other publicly available websites.
As a result of its actions, Australia’s privacy watchdog issued Clearview AI with determination orders. These orders require that Clearview AI cease collecting facial images and biometric templates from individuals in Australia, and to destroy existing images and templates collected from Australia.
According to the OAIC, its determination highlights the lack of transparency around Clearview AI's collection practices, the monetisation of individuals' data for a purpose entirely outside reasonable expectations, and the risk of adversity to people whose images are included in its database.
Privacy Commissioner Angelene Falk said the covert collection of this kind of sensitive information is “unreasonably intrusive and unfair”.
“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database,” she said.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.”
She said the practices undertaken by Clearview AI fall well short of Australians’ expectations for the protection of their personal information. She also said the privacy impacts of Clearview AI’s biometric system were not necessary, legitimate and proportionate, nor did they have regard to any public interest benefits.
It’s not over, however: the OAIC is currently finalising an investigation into the Australian Federal Police’s trial use of the technology. In April last year, the AFP admitted to using Clearview AI to help counter child exploitation, despite not having an appropriate legislative framework in place.