IBM promised to back off facial recognition — then it signed a $69.8 million contract to provide it

IBM has returned to the facial recognition market — just three years after announcing it was abandoning work on the technology due to concerns about racial profiling, mass surveillance, and other human rights violations.

In June 2020, as Black Lives Matter protests swept the US after George Floyd’s murder, IBM chief executive Arvind Krishna wrote a letter to Congress announcing that the company would no longer offer “general purpose” facial recognition technology. “The fight against racism is as urgent as ever,” he wrote. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” Later that year, the company redoubled its commitment, calling for US export controls to address concerns that facial recognition could be used overseas “to suppress dissent, to infringe on the rights of minorities, or to erase basic expectations of privacy.”

Despite these announcements, last month, IBM signed a $69.8 million (£54.7 million) contract with the British government to develop a national biometrics platform that will offer a facial recognition function to immigration and law enforcement officials, according to documents reviewed by The Verge and Liberty Investigates, an investigative journalism unit in the UK.

A contract notice for the Home Office Biometrics Matcher Platform outlines how the project initially involves developing a fingerprint matching capability, while later stages introduce facial recognition for immigration purposes — described as “an enabler for strategic facial matching for law enforcement.” The final stage of the project is described as delivery of a “facial matching for law enforcement use-case.”

The Home Office Biometrics Matcher Platform includes “strategic” matching of photos in a database

The platform will allow photos of individuals to be matched against images stored on a database — what is sometimes known as a “one-to-many” matching system. In September 2020, IBM described such “one-to-many” matching systems as “the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.”

IBM spokesman Imtiaz Mufti denied that its work on the contract was in conflict with its 2020 commitments. “IBM no longer offers general-purpose facial recognition and, consistent with our 2020 commitment, does not support the use of facial recognition for mass surveillance, racial profiling, or other human rights violations,” he said.

“The Home Office Biometrics Matcher Platform and associated Services contract is not used in mass surveillance. It supports police and immigration services in identifying suspects against a database of fingerprint and photo data. It is not capable of video ingest, which would typically be needed to support face-in-a-crowd biometric usage.”

Human rights campaigners, however, said IBM’s work on the project is incompatible with its 2020 commitments. Kojo Kyerewaa of Black Lives Matter UK said: “IBM has shown itself willing to step over the body and memory of George Floyd to chase a Home Office contract. This won’t be forgotten.”

Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: “The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.”

“There is no application of one-to-many facial recognition that is compatible with human rights law.”

Police use of facial recognition has been linked to wrongful arrests in the US and has been challenged in the UK courts. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found there was no “explicit legal basis” for the force’s use of the technology and raised concerns that it may have breached human rights law. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology breached privacy rights and broke equality laws. The force paused its use of facial recognition after the verdict but has since resumed using the technology.

Other tech firms have imposed partial bans on the use of their facial recognition services for law enforcement. In the days after IBM declared its plans to leave the facial recognition sector, Amazon and Microsoft both announced moratoriums on the sale of their facial recognition services to police departments in the US.

Amazon initially announced a one-year moratorium on police use of its Rekognition software in June 2020 and said it would be extending the ban “indefinitely” the following year. A spokeswoman for the company confirmed that the moratorium, which prohibits “use of Amazon Rekognition’s face comparison feature by police departments in connection with criminal investigations,” is still in place.

Microsoft said in June 2020 that it would not sell facial recognition software to US police departments until a national law is introduced governing use of the technology. When contacted by The Verge and Liberty Investigates, a spokeswoman for Microsoft referred to the company’s website, which states that use of the Azure AI Face service “by or for state or local police in the US is prohibited by Microsoft policy.” 

The UK Home Office did not respond to a request for comment.

Source link