Saturday, 21 July, 2018

Facial recognition technology is "dangerously inaccurate"

Police facial recognition software has been found to be wrong over 91% of the time
Nellie Chapman | 16 May, 2018, 15:17

Big Brother Watch's campaign, calling on United Kingdom public authorities to immediately stop using automated facial recognition software with surveillance cameras, is backed by David Lammy MP and 15 rights and race equality groups including Article 19, Football Supporters Federation, Index on Censorship, Liberty, Netpol, Police Action Lawyers Group, the Race Equality Foundation, and Runnymede Trust.

Government minister Susan Williams - who once described the use of AFR as an "operational" decision for the police - said earlier this year that the government is to create a board made up of the information, biometrics and surveillance camera commissioners to oversee the technology.

The privacy group also said that "automated facial recognition technology is now used by United Kingdom police forces without a clear legal basis, oversight or governmental strategy".

Big Brother Watch submitted freedom of information requests to every police force in the UK.

United Kingdom police facial recognition is lawless, undemocratic, and dangerously inaccurate.

However, the Met Police claimed that the figure is misleading because there is human intervention after the system flags a match.

Leicestershire Police tested facial recognition in 2015, but is no longer using it at events.

High-definition cameras detect all the faces in a crowd and compare them with existing police photographs, such as mugshots from previous arrests.
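The report does not describe how that comparison is done internally, but a minimal sketch of the general approach - a face-embedding model plus a similarity threshold, not NEC's proprietary NeoFace algorithm - might look like this, with random vectors standing in for real embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_watchlist_matches(crowd_embeddings, watchlist_embeddings, threshold=0.6):
    """Compare every face detected in the crowd against every watch-list photo.

    Both inputs are lists of fixed-length embedding vectors produced by some
    face-recognition model (hypothetical here). Any pair whose similarity
    exceeds `threshold` is flagged as a possible match for an officer to review.
    """
    alerts = []
    for i, face in enumerate(crowd_embeddings):
        for j, suspect in enumerate(watchlist_embeddings):
            score = cosine_similarity(face, suspect)
            if score >= threshold:
                alerts.append((i, j, score))
    return alerts

# Illustrative run: one watch-list entry is a noisy copy of a crowd face,
# so it triggers an alert; the other entries do not.
rng = np.random.default_rng(0)
crowd = [rng.normal(size=128) for _ in range(5)]
watchlist = [crowd[2] + rng.normal(scale=0.1, size=128), rng.normal(size=128)]
print(find_watchlist_matches(crowd, watchlist))
```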

The product used by both the Metropolitan Police and South Wales Police is called "NeoFace Watch", made by Japanese firm NEC. South Wales Police said the technology made 2,685 matches between May 2017 and March 2018 - but 2,451 were false alarms.
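Those numbers are where the "over 91%" figure comes from; a quick back-of-the-envelope check (my arithmetic, not Big Brother Watch's):

```python
# False alarms reported for South Wales Police, May 2017 to March 2018.
total_matches = 2685
false_alarms = 2451

false_alarm_rate = false_alarms / total_matches
print(f"{false_alarm_rate:.1%}")  # -> 91.3%, i.e. wrong over 91% of the time
```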

I witnessed the Metropolitan Police use automated facial recognition at Notting Hill Carnival a year ago, and while watching for only five minutes I saw the system wrongly identify two innocent women walking down the street as men on the police's "watch-list".

For instance, a developer of a content-filtering AI system may claim it can identify a high percentage of terrorist content on the web, but that figure only holds when the content being analysed is already known to be terrorist material.
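The point being made is a base-rate effect: a system can look accurate when tested only on material already known to be positive, yet still produce mostly false alarms in the wild. A rough illustration with invented numbers (none of these figures come from the report):

```python
# Hypothetical numbers: a classifier that catches 94% of genuine terrorist
# content and wrongly flags only 1% of everything else still produces mostly
# false alarms when such content is a tiny fraction of what it scans.
sensitivity = 0.94        # share of genuine positives correctly flagged
false_positive_rate = 0.01
base_rate = 0.001         # assume 1 in 1,000 scanned items is actually positive

true_positives = sensitivity * base_rate
false_positives = false_positive_rate * (1 - base_rate)
precision = true_positives / (true_positives + false_positives)
print(f"Share of flags that are correct: {precision:.1%}")  # roughly 8.6%
```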

"On a much smaller number of occasions, officers went and spoke to the individual. realised it wasn't them, and offered them the opportunity to come and see the van".

"We do not not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts", a spokesperson told The Register. One was matched incorrectly on the watch list, and the other was on a mental health-related watch list. "Faces in the video stream that do not generate an alert are deleted immediately".

Civil liberties organisation Big Brother Watch (BBW) published its findings on the Met's use of facial recognition technology in a report that it is set to present to Parliament later today.

Information Commissioner Elizabeth Denham also expressed concern about both the transparency and the proportionality of the retention of the 19 million images in the Police National Database.

Police facial recognition cameras have already been trialled at large events across the United Kingdom, including football matches and festivals. However, the British people will need to decide whether or not they want to live in a world where they are continuously watched, intrusively surveilled, and biometrically tracked, and think about how that may affect their fundamental rights.
