The Use of Live Facial Recognition Technology in Scotland: A New North-South Divide?
25 February 2020
Earlier this month, the Scottish Parliament’s Justice Sub-Committee on Policing published a report which concluded that live facial recognition technology is currently “not fit” for use by Police Scotland.
Police Scotland had initially planned to introduce live facial recognition technology (“the technology”) by 2026. However, those plans have now been called into question by the report’s findings that the technology is extremely inaccurate, discriminatory, and ineffective. The report also noted that the technology would be a “radical departure” from Police Scotland’s fundamental principle of policing by consent.
In light of the above, the Sub-Committee concluded that there would be “no justifiable basis” for Police Scotland to invest in the technology.
Police Scotland agreed – at least for the time being – and confirmed in the report that they will not introduce the technology at this time. Instead, they will engage in a wider debate with various stakeholders to ensure that the necessary safeguards are in place before introducing it. The Sub-Committee believed that such a debate was essential in order to assess the necessity and accuracy of the technology, as well as the potential impact it could have on people and communities.
The report is undoubtedly significant as it reaffirms that the current state of the technology is ineffective. It therefore strengthens the argument that we should have a much wider debate about the technology before we ever introduce it onto our streets. This is important not only on a practical level but also from a human rights perspective, especially set against the backdrop of the technology’s controversial use elsewhere.
What is live facial recognition technology?
By way of background, facial recognition is a general term for technology that can catalogue and recognise the human face. Typically, facial recognition does this by measuring the unique ratios between an individual’s facial features such as the eyes, nose and mouth.
One of the most controversial uses of facial recognition technology is live facial recognition. The latter works by scanning our facial features – without our consent and possibly even our knowledge – to create a unique numerical code. If that numerical code matches an image on the police service’s “watch list”, the live facial recognition system will issue an alert to the police. Anyone can be included in this “watch list” and images can be taken from anywhere, even our social media.
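The matching process described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not a description of any real police system: the suspect names, feature ratios and threshold are all invented, and production systems use learned embeddings from neural networks rather than hand-measured ratios. The underlying logic, however, is the same — reduce a face to a numerical code and compare it against every entry on a watch list.

```python
import math

def face_code(ratios):
    """Reduce a face to a numerical code: here, simply a tuple of
    feature ratios (e.g. eye spacing divided by nose length)."""
    return tuple(round(r, 3) for r in ratios)

def similarity(code_a, code_b):
    """Euclidean distance between two face codes; smaller means more alike."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(code_a, code_b)))

def scan(face, watch_list, threshold=0.05):
    """Compare a scanned face against every watch-list entry and return
    the names whose codes fall within the matching threshold."""
    code = face_code(face)
    return [name for name, listed in watch_list.items()
            if similarity(code, face_code(listed)) <= threshold]

# Hypothetical watch list of feature-ratio vectors.
watch_list = {
    "suspect_a": (1.61, 0.74, 1.02),
    "suspect_b": (1.40, 0.88, 0.95),
}

# A passer-by whose measured ratios happen to sit close to suspect_a's
# entry triggers an alert, whether or not they are actually that person.
alerts = scan((1.60, 0.75, 1.01), watch_list)
```

Note that the passer-by never consents to the scan: the alert fires purely because their numbers fall inside the threshold, which is also how the “false matches” discussed below arise.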
Where is it used?
In the UK, the South Wales Police (“SWP”) and the Metropolitan Police Service (“MET”) have both faced legal challenges as a result of implementing the technology.
In R (Bridges) v Chief Constable of South Wales Police and Secretary of State for the Home Department, for example, it was held that the current legal framework was adequate and that the SWP’s use of the technology was consistent with the requirements of the Human Rights Act 1998 and data protection legislation. However, whilst the judge concluded that the current legal regime was not “out of kilter”, he also noted that the matter would likely require periodic review in the future (see the Blog’s coverage of the judgment here).
Big Brother Watch and Baroness Jenny Jones are also currently considering their next legal steps against the MET following the introduction of the technology in some parts of London. It was first used in Stratford on 10 February and is now – as of last Thursday, after only two hours’ notice on Twitter – being deployed in “key locations” in Westminster.
In light of the above, the Sub-Committee’s report recommended that the Scottish Police Authority should review these legal challenges and consider how to mitigate similar challenges in Scotland.
The technology is not only controversial in the UK. In January, for example, the EU Commission was reportedly considering a 5-year blanket ban on the use of the technology in a leaked draft of a white paper on artificial intelligence. However, this was not mentioned in the final draft of the paper which was published last Wednesday. Instead, the EU Commission committed to carrying out a broad consultation involving Member States, civil society, industry and academics before introducing any concrete proposals to regulate the technology.
Across the Atlantic, several cities – such as San Francisco, Somerville and Oakland – have also banned local law enforcement from using the technology. It is, however, currently being used in cities such as Chicago and Detroit. Earlier this month, US senators also introduced a bill – the Ethical Use of Facial Recognition Act – which would prevent the technology from being used at a federal level if it was enacted. Like the EU Commission’s proposals, the bill would establish a commission which would study the technology and propose guidelines for its use. An important caveat of the bill, however, is that police officers would still be permitted to use the technology if they have first obtained a warrant.
Other countries which have introduced the technology include China, Brazil and India.
The Sub-Committee Report’s Findings
i) The Technology’s Inaccuracy [paras 77-103]
There was substantial evidence in the report which illustrated that the current technology is extremely inaccurate, discriminatory, and ineffective.
Most striking was the evidence of the “in-built” racial and gender biases in the technology. Such biases exist because the machine-learning algorithms used in the technology inherit the gender, racial and socio-economic biases of their human creators (predominantly Caucasian males). As a result, the algorithms are most accurate on Caucasian male faces.
This finding was not surprising. Indeed, several academic studies have previously shown that the technology disproportionately misidentifies transgender and non-binary people, ethnic minorities, and young people.
Similarly, there was an abundance of statistical evidence in the report highlighting the technology’s inaccuracy. For example, the accuracy rate of the technology was only 2% when it was used by the MET in 2017, and only 9% when it was used by the SWP. It therefore misidentifies people at a far higher rate than it correctly identifies them, producing many “false matches” in practice. These false matches will disproportionately affect women and ethnic minorities, who will be more likely to be stopped by the police and required to justify their presence in the surveilled area – even if they are innocently on their way to the supermarket.
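To see what those percentages mean in practice, a short back-of-the-envelope calculation helps. The figures below are illustrative only: they assume the quoted rates describe the share of alerts that turn out to be genuine matches, and the alert count of 100 is a round number chosen for clarity, not a figure from the report.

```python
def false_alerts(total_alerts, accuracy_rate):
    """Given an accuracy rate (the share of alerts that are genuine
    matches), estimate how many of the alerts were false."""
    true_matches = round(total_alerts * accuracy_rate)
    return total_alerts - true_matches

# Illustrative only: imagine 100 alerts at the report's quoted rates.
met_false = false_alerts(100, 0.02)  # MET, 2017: 98 of 100 alerts false
swp_false = false_alerts(100, 0.09)  # SWP: 91 of 100 alerts false
```

On these assumptions, the overwhelming majority of people flagged by the system would be innocent members of the public stopped on the strength of a false match.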
The above evidence raises the question: what effectiveness, if any, does the technology have? As we shall see below, that question is an important one from a human rights perspective when assessing the necessity and proportionality of its use.
ii) Necessity and Proportionality [paras 104-117]
The report subsequently considered whether it was necessary for the police service to introduce the technology, whether mass surveillance of the public by the police service was proportionate, and whether the public have trust in its use.
Obtaining public confidence in the technology was first said to be crucial. That meant striking the correct balance between various human rights – namely the right to privacy, freedom of expression, and freedom of association under Articles 8, 10 and 11 of the European Convention on Human Rights (“the ECHR”) – and crime detection. If Police Scotland failed to do this, public confidence in the police service could diminish. Any proposals purporting to introduce the technology should therefore be “transparent and subject to public consultation”.
However, some evidence suggested that, in any event, the use of the technology was not necessary or proportionate and was a breach of Article 8 of the ECHR. This is because the technology indiscriminately scans and captures images of our faces when we are within its view – using our sensitive data – without our consent and sometimes even without our knowledge. Other evidence suggested that our right to a fair trial under Article 6 of the ECHR would also be breached if the technology was ever used by the police service as a “fishing exercise” [see para 113].
That being said, some evidence in the report did suggest that the use of the technology could be proportionate if certain requirements were met. These included:
- a strong legal framework to ensure that it was only used when necessary;
- a narrow use to reduce the number of people being scanned;
- a restriction on the amount of time it can be used for; and
- a data impact assessment to consider its impact on human rights.
In light of this, the Sub-Committee recommended that Police Scotland should properly assess the necessity, proportionality and parameters of the technology’s use before introducing it. They must also demonstrate that there is public consent for its use, which would presumably be obtained through a public consultation on the issue.
Unsurprisingly, the Sub-Committee also emphasised the need for a robust legal and regulatory framework as well as the need for comprehensive human rights, equalities, community impact, data protection and security assessments to be carried out before the technology is ever introduced.
iii) Impact on Human Behaviour [paras 144-160]
The Sub-Committee also heard evidence in relation to the introduction of the technology at football matches, concerts, protests and marches in England and Wales.
Whilst the views of police officers were mixed when asked to discuss the positives of the technology, it was clear that ordinary members of the public vehemently objected to it. Some individuals who were interviewed, for example, described the technology as “disproportionate, intimidatory, provocative, and counter-productive”.
Evidence highlighted that such views could have a “chilling” impact on individuals’ willingness or ability to exercise their rights to freedom of expression and association. Indeed, an individual may decide against protesting on a particular issue if it means that they will have to surrender their sensitive data to the police via live facial recognition technology. And if we cannot freely exercise these rights without fear of mass surveillance, what use do such rights have?
It appears, then, that there is a new North-South divide in the UK. If you find yourself walking in some parts of London or Wales, for example, live facial recognition technology will now be able to scan your face without consent and you may even be subject to an on-the-spot identity check (particularly if you are a woman or an ethnic minority). In Scotland, however, you will not have to worry about this – at least for now.
Police Scotland will instead engage in a much wider debate on the technology’s use before introducing it. This approach is welcomed as it rejects the notion that live facial recognition technology is an inevitable consequence of technological development and increasing levels of crime. And if the past decade has shown us anything, it is that a hands-off, reactive regulatory framework is an ineffective solution for rapid technological change.
Other countries and organisations – such as the US and the EU Commission – are also adopting a cautious approach in order to properly understand the technology. And to better understand the technology is to better protect our human rights, particularly the rights to privacy, freedom of expression, and freedom of association.
Euan Lynch is currently studying for the Graduate Diploma in Law while teaching English in Madrid and will commence the Legal Practice Course in July.