Facial recognition is under fire again. Privacy rights group Big Brother Watch filed a legal complaint against supermarket chain Southern Co-op in July, accusing the retailer of using the technology without customers’ consent.
However, the battle over facial recognition is much bigger than privacy lobby groups and grocery stores duking it out in the courts or trying to sway public opinion. In the end, it’s about how much surveillance we should accept in our daily lives.
The question has grown more pertinent in recent years: the facial recognition market is expected to be worth $20.5bn by 2031, according to a forecast report by Transparency Market Research.
The technology has the potential to change how personal identities are verified and how data is collected through biometric profiles.
“Retailers have on average over 30 cameras per store in the UK,” Simon Randall, CEO of video privacy and security company Pimloc, tells Verdict. “These camera feeds are being collected and analysed at a breathtaking scale.”
So, where does that leave facial recognition in the ongoing battle between shops and privacy advocates?
The Big Brother Watch vs Co-op debate
Big Brother Watch has a massive problem with the Southern Co-op. The privacy group claims that the retailer uses facial recognition software in its CCTV cameras, taking biometric scans of shoppers and using those scans in an unethical way.
The NGO says the use is “unlawful” and “Orwellian in the extreme”. In case it wasn’t clear: them be fightin’ words.
The Big Brother Watch complaint to the Information Commissioner’s Office (ICO) was sent via the group’s lawyers at data rights firm AWO.
The campaign group’s legal and policy officer Madeleine Stone claims the Southern Co-op’s use of facial recognition deprives shoppers of their right to privacy in a way that’s likely to breach data protection laws.
Since biometric data comes with more security risks, it is classed as special category personal data under the General Data Protection Regulation (GDPR).
The law gives the regulator the power to impose a civil monetary penalty on data controllers of up to £17.5m or 4% of global annual turnover, whichever is higher, if they’re found to be in breach of the legislation.
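As a rough illustration of how that statutory maximum is computed (the UK GDPR higher maximum is the greater of £17.5m or 4% of global annual turnover):

```python
# Sketch: the UK GDPR higher maximum penalty is the greater of
# £17.5m or 4% of a data controller's global annual turnover.
def max_penalty_gbp(global_turnover_gbp: float) -> float:
    return max(17_500_000, 0.04 * global_turnover_gbp)

# A firm turning over £100m hits the fixed floor; a £1bn firm does not.
print(f"£{max_penalty_gbp(100_000_000):,.0f}")    # £17,500,000
print(f"£{max_penalty_gbp(1_000_000_000):,.0f}")  # £40,000,000
```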
However, Big Brother Watch believes the risks are too big to ignore even if the ICO isn’t swayed to fine the Southern Co-op.
“Innocent people could be placed on secret watchlists without due process, meaning they could be spied on, blacklisted in multiple stores, and even denied food shopping,” Stone tells Verdict. “This use of facial recognition technology is deeply chilling and we urge the public to boycott the Southern Co-op whilst it continues to spy on shoppers.”
Big Brother Watch believes the supermarket’s cameras represent a severe security concern. The group complained that the Southern Co-op uses facial recognition software from Chinese state-owned firm Hikvision, whose cameras have been associated with security issues. Hikvision is currently banned in the US.
In July, 67 MPs and members of the House of Lords called for a government ban on Hikvision and Dahua cameras in the UK, a call Big Brother Watch was quick to embrace. The parliamentarians condemned “[Hikvision and Dahua’s] involvement in technology-enabled human rights abuses in China”. As of August, no ban has been put in place.
Retail security firm Facewatch supplies the cameras used by the Southern Co-op. Its “cloud-based facial recognition system safeguards businesses against crime creating a safer environment for customers and colleagues. The system will send an alert the instant a subject of interest enters your premises,” according to the Facewatch website.
Southern Co-op says its facial recognition technology is justified
The Southern Co-op has 35 stores across England that use the cameras. The cameras are located in Portsmouth, Chichester, Bristol, Southampton, Bournemouth, London, and Brighton and Hove. These cameras allow shoppers’ photos to be shared within an eight-mile radius from where the photos were taken in London or within a 46-mile radius in rural locations, Big Brother Watch claims.
A spokesperson for Southern Co-op says the chain believes the use of facial technology is justified to prevent violent attacks or abuse in its stores. That being said, the retailer would welcome any input from the ICO.
“We would welcome any constructive feedback from the ICO as we take our responsibilities around the use of facial recognition extremely seriously and work hard to balance our customers’ rights with the need to protect our colleagues and customers from unacceptable violence and abuse,” a Southern Co-op spokesperson tells Verdict.
“The safety of our colleagues and customers is paramount and this technology has made a significant difference to this, in the limited number of high risk locations where it is being used. Signage is on display in the relevant stores. As long as it continues to prevent violent attacks, then we believe its use is justified.”
The Southern Co-op argues that it only puts people on the blacklist if the retailer has evidence of them previously engaging in criminal activity or anti-social behaviour. The list also includes people who may have been banned from its stores. The facial recognition technology is used to detect their presence and alert staff.
The Southern Co-op claims it only holds the images for a year from the alleged perpetrator’s last offence. The chain invites individuals to submit a subject access request to check if this applies to them.
Any images of shoppers who are not on the supermarket’s list are only kept for three days if a crime is committed on a store’s premises.
“We have extensive procedures in place to mitigate any risks, and work with our facial recognition technology provider to ensure the facial recognition system is secure and is GDPR compliant,” the spokesperson adds.
Bence Jendruszak, COO and co-founder of fraud prevention firm SEON, understands the benefits of this technology, given its ability to mitigate shoplifting or staff incidents. He does, however, welcome more scrutiny of these solutions.
“It’s important that the technology is properly scrutinised as it becomes more widely adopted,” Jendruszak tells Verdict. “That is what the Big Brother Watch campaign is doing here.”
Facial recognition provokes privacy and security concerns
The Southern Co-op isn’t alone in using facial recognition technology; other companies have adopted it too. The growing list of organisations using the technology has drawn mounting concern from privacy advocacy groups, and for several reasons.
“It can provoke a lot of suspicion and anxiety,” Natalie Cramp, CEO of data tech consultancy Profusion, tells Verdict. “It’s easy to see how it can be misused in an intrusive and oppressive way.”
At least 72% of people are worried about how their biometric data might be misused or stolen, according to research from business app discovery and analyst platform GetApp. Twenty-five percent do not trust this technology to use or store their data in a legal and safe way, fearing third party or criminal use of the data.
Some are also concerned that criminals could hack software and steal their biometric data.
“If a breach did occur, then criminals would have a powerful new tool for blackmail, extortion and identity fraud,” warns Jendruszak.
Mistaken identity could lead innocent people to be accused of crimes
One of the more serious risks of using facial recognition is that of mistaken identity. There have been several examples of the technology erroneously identifying innocent individuals as having committed crimes.
Big Brother Watch’s research suggests that 87% of the Met Police’s facial recognition matches are misidentifications.
Not only could that potentially land innocent people in jail, but it could also lead to some criminals getting away with their crimes.
In 2018, an American man was arrested and accused of stealing a pair of socks from a TJ Maxx store. He was arrested after “a security camera photo using facial recognition” was given as evidence, Wired reported.
The image, allegedly proving the man’s identity, was verified only by texting it to a witness, the store security guard, who was asked to confirm that it showed the man they had charged.
The accused man said it was not him in the surveillance image, as he was at a nearby hospital attending the birth of his son.
British police already use similar software. The use of automated facial recognition by South Wales Police was approved after the High Court ruled in 2019 that it was “consistent with the requirements of the Human Rights Act and the data protection legislation”.
The High Court also said that personal data was immediately deleted after processing if an individual’s image did not match anyone on the police watchlist.
This case followed legal action from former Liberal Democrat councillor Ed Bridges, who sued after the police’s facial recognition systems captured his image twice. The councillor claimed it had breached his privacy and caused him distress.
In response to the ruling, Bridges called facial recognition a sinister technology that was used without people’s consent, breaching privacy, and said he would fight until people were freed from this “disproportionate government surveillance”.
The risk of bias and discrimination
There is also the potential for bias in facial recognition systems to lead to discrimination of marginalised groups.
Ella Jakubowska, policy advisor at European Digital Rights, tells Verdict that the use of these systems for deciding who can enter a supermarket is a form of trial by a discriminatory algorithm.
“It short circuits the presumption of innocence, by replacing it with a presumption of guilt,” argues Jakubowska. “We know that people of colour and other marginalised groups are going to be the most affected as a result.”
Jakubowska believes biometric data is sensitive and its use could lead to the suppression of people’s free expression and civil liberties.
“It can also be used to stratify and discriminate against people,” she adds. “That’s why these data already have high protections in European law. We shouldn’t be putting shoppers’ sensitive data at such risks when there are many other effective and safer solutions to tackle shoplifting or keep supermarket staff safe.”
Customers raise concerns over lack of consent
Privacy advocacy groups like Big Brother Watch and European Digital Rights aren’t the only ones protesting the use of the technology. Customers are also arguably growing wary of the technology.
Research from GetApp shows that 85% of people want the right to opt out of private companies using facial recognition technology on them. Eighty-seven per cent want the right to know if a company processes their data, along with the right to demand the deletion of that data.
“It is essential that companies using this sensitive information for their services put in place strong safeguards,” David Jani, content analyst at GetApp UK, tells Verdict. “This ensures it is stored securely and to be transparent about how users’ data is accessed as well as where they may opt-out of its use.”
Cramp argues that people must give their consent and that companies must clearly communicate with them to ensure that they do so.
“This includes being incredibly transparent and clear with what and why they are using this tech in the first place,” says Cramp. “Failing to inform people or not giving them a way to consent to be monitored will immediately undermine trust and could severely damage an organisation’s reputation.”
The technology is not always perfect
Facial recognition technology has two weaknesses that could affect its reliability: cameras and databases.
Starting with cameras, facial recognition relies on pixels to collect the information from images or videos that it needs.
“The fewer the pixels available to identify the image – due to distance, image quality or even environmental conditions – the less accurate the process is,” Neil Shanks, director at Corps Consult, Corps Security’s advisory unit, tells Verdict.
Camera angles, lighting, proximity and video quality can affect how well the tech recognises faces, making the results less reliable. The technology can also be confused by disguises or by people slightly changing their appearance.
When it comes to the database, the larger it is, the harder it is for the system to identify an individual correctly: every additional face on file is another opportunity for a false match, particularly where the match is not backed up by a second form of verification.
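That scaling effect can be sketched with a back-of-the-envelope model: if every one-to-many comparison carries some small chance of a false match, the probability of at least one false alert compounds with watchlist size. The error rate below is purely illustrative, not a figure from any vendor.

```python
# Illustrative only: how the chance of at least one false match grows
# with watchlist size, assuming a hypothetical per-comparison false
# match rate (FMR) and independent comparisons.
def prob_any_false_match(fmr: float, gallery_size: int) -> float:
    """Probability that at least one of N comparisons is a false match."""
    return 1 - (1 - fmr) ** gallery_size

for n in (100, 1_000, 10_000):
    p = prob_any_false_match(fmr=0.0001, gallery_size=n)
    print(f"watchlist of {n:>6}: ~{p:.1%} chance of a false alert")
```

Even a seemingly tiny per-comparison error rate produces frequent false alerts once the watchlist reaches tens of thousands of faces.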
Backlash
The bad press and the botched usage of the technology have motivated a massive backlash against facial recognition software. Several NGOs are backing this push in different ways.
Big Brother Watch has been encouraging people to take action and organise demonstrations. Similarly, human rights group Amnesty International launched its Ban the Scan campaign against facial recognition systems last year. The group said the various biometric systems “violate the right to privacy and threaten the rights to freedom of peaceful assembly and expression”.
The press and activists have forced several tech titans to reconsider their use of the technology.
In May, the ICO ordered facial recognition company Clearview AI to pay £7.5m in fines. The regulator said the company had collected over “20 billion images of people’s faces and data from publicly available information on the internet and social media platforms all over the world to create an online database.” This was done without people being informed or given an opportunity to consent.
Facebook scrapped the tech, deleting over a billion users’ facial recognition templates, when the social media giant rebranded to Meta. The removal was welcomed by digital privacy advocates.
However, Meta did not rule out using the tech for smaller personal authentication purposes, such as logging in without passwords or regaining access to locked accounts.
Last year, Amazon announced it would extend, until further notice, the moratorium on police use of its facial recognition software that it introduced in 2020. The moratorium on the company’s facial recognition software, Amazon Rekognition, does not cover its use in identifying victims of human trafficking.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” Amazon said in a statement at the time.
Does facial recognition deserve all the negativity?
Facial recognition providers have taken a beating over the years. The question is whether they have deserved that battering. For starters, there are legitimate use cases for the technology.
“Facial biometrics have been given a bit of a bad rap because either the industry itself has not done a good job of expressing what it does and how it does what it does,” Russell King, CEO of facial recognition and tech company Xydus, tells Verdict. “Or there’s a fair bit of scare mongering that goes on from interested parties.”
Xydus has supported the NHS with this technology for their login program and Covid passports.
Facial recognition companies like Xydus are not strangers to making the case in public that the technology is not all bad, provided people can give their consent. King says he feels strongly about people’s right to opt in or out of the service.
“[It is] capturing our core identity where we don’t have clarity on what might happen to that information once they finish with it, because they only need it for a momentary purpose,” says King. “That must be respected because we can’t change this. It’s permanent and it is part of our identity in a very meaningful way.”
Can facial recognition enhance the shopper experience?
King suggests there are many reasons to trust verified facial recognition providers, arguing that the technology could help improve people’s daily lives.
“It is an incredibly safe and secure way of providing people with a digitally verified user record, which then they can reuse to authenticate themselves,” King argues. “[Facial recognition] is a contributing factor that will ultimately make people’s lives safer and easier in engaging with digital services.”
Anton Nazarkin, global development director at VisionLabs, says that the Southern Co-op has merely embraced the benefits of facial recognition.
He tells Verdict that, for one, it could enhance the customer experience. For instance, it could be used to allow customers to pay for their goods without ever having to stop by a till.
“Facial recognition allows customers to enjoy a seamless and quick checkout experience at shops – just like Co-op – an experience which many have embraced,” says Nazarkin.
Technology can be used for right-to-work checks
The use of biometrics can be a positive for people when it comes to security and verification needs.
“It presents an opportunity for people to use a verified credential that removes all the need for passwords and usernames,” says King. “That can only be good because they’re appalling as a security feature.”
This means that companies could conduct digital checks using certified facial recognition systems they can trust to work properly.
“It gives everybody clarity that we are engaging with organisations that a third party has certified meet a set of robust conditions,” says King.
Facial recognition can also be used to improve right-to-work checks digitally. King believes an automated approach would be much more transparent for companies to use for these checks. Using facial recognition can provide a higher level of consistency and accuracy in checks.
“There’s no human judgment involved in any of our process. A consistent standard doesn’t exist with humans, computers deliver a very consistent process,” King argues. “But it’s got to be transparent.”
Digital checks for new employees would work by taking an incoming face, from an ID photo or a live capture, and running it through a matching service against all the company’s records to see if there is commonality in those records.
“[That reduces] the organisation’s exposure to potential impersonation fraud, to a very, very limited proposition,” says King.
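The matching step King describes can be pictured as comparing an embedding of the incoming face against the embeddings already on record. The sketch below is hypothetical: the toy vectors stand in for embeddings that a real face-recognition model would produce, and the similarity threshold is arbitrary.

```python
# Hypothetical sketch of a matching service: compare an incoming face
# embedding against stored records and flag any record whose embedding
# is similar above a threshold. Real systems derive embeddings from a
# trained face-recognition model; the vectors here are toy data.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_matches(incoming, records, threshold=0.9):
    """Return record IDs whose stored embedding resembles the incoming face."""
    return [rid for rid, emb in records.items()
            if cosine_similarity(incoming, emb) >= threshold]

records = {
    "emp-001": [0.12, 0.80, 0.58],  # toy embeddings, not real biometrics
    "emp-002": [0.95, 0.10, 0.29],
}
incoming = [0.11, 0.79, 0.60]
print(find_matches(incoming, records))  # prints ['emp-001']
```

A flagged match against an existing record would indicate possible impersonation or duplication, which is the exposure King says the process reduces.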
What is next for facial recognition?
There is clearly a balance to be struck between the risks and the rewards the technology could introduce. It is important to ensure facial recognition is not misused, passed on to third parties or is biased.
“This means creating strong policies and safeguards that are informed by data ethics,” says Cramp. “We need to quickly define the rules of the game and educate the general public about what is at stake.”
Facial recognition is still in its early days, but has clearly already fuelled controversy. Privacy advocates fear that the introduction of the technology will put people’s privacy at risk. They argue that these concerns are amplified in places like the UK, which do not have many laws regarding facial recognition.
The lack of laws means it would be harder for charges to be brought against companies that misuse biometric data, or against police who use it to mistakenly identify someone; there is no required action or penalty.
The government has been engaged in consultations and testing to develop a set of rules to regulate and encourage the use of digital identity technology. Its proposed UK Digital Identities and Attributes Framework still has some way to go.
“[Facial recognition] is still in its relative infancy for commercial use,” says Shanks. “This is an area that the security industry must pay close attention to. The outcome of the Southern Co-op case may provide insight into what future legislation may look like.”