People on the streets of Tokyo. Image: Jason Ortego/Unsplash

Walk around any city and your face will be caught on camera and might even be added to a facial-recognition database. That data can now be processed in real time, and regulations about how it can be used are minimal and generally weak.

The military, law-enforcement agencies, and commercial corporations are exploiting facial recognition and artificial intelligence (AI) to collect personal data. Yet the legal frameworks controlling how that data can be used have not kept pace with the technology.

In May 2019, San Francisco became the first U.S. city to ban the use of facial recognition by its authorities. However, the city ordinance did not prevent private companies from using facial ID in ways that people find objectionable.

In July 2019, the first independent evaluation of the use of facial recognition by London’s Metropolitan Police warned that it was “highly possible” the system would be ruled unlawful if challenged in court.

The use of facial recognition technology has also faced a backlash in Canada. In May 2019, privacy and civil liberties advocates called for an immediate moratorium on the Toronto police’s use of facial recognition technology, saying it had been deployed without public knowledge and without proper checks and balances.

“There is no transparency associated with this,” former information and privacy commissioner of Ontario Ann Cavoukian told the CBC. “And you can’t hold people accountable if you don’t know what’s going on…And while people may not be aware of it, there’s a very high false-positive rate for facial recognition.” Toronto police ran the pilot project from March 2018 to December 2018 and reported that it had been an “immediate success” in terms of identifying criminal offenders and previously unknown suspects.

As facial recognition becomes increasingly common, there are also growing concerns about the gender and racial bias embedded in many systems. Writing in The Atlantic, Tiffany C. Li, a fellow at Yale Law School’s Information Society Project, puts the onus on tech companies themselves:

“Developers need to go further and build actual privacy protections into their apps. These can include notifications on how data (or photos) are being used, clear internal policies on data retention and deletion, and easy workflows for users to request data correction and deletion. Additionally, app providers and platforms such as Apple, Microsoft, and Facebook should build in more safeguards for third-party apps.”

All well and good, but tackling misuse, misappropriation, and mistaken identity requires legislation and regulation, including stronger privacy laws that address the harms inherent in these technologies. In Li’s opinion:

“To deal with privacy risks in the larger data ecosystem, we need to regulate how data brokers can use the personal information they obtain. We need safeguards against the practical harms that invasions of privacy can cause; that could mean, for example, limiting the use of facial-recognition algorithms for predictive policing. We also need laws that give individuals power over data they have not voluntarily submitted.”

In short, global corporations play by their own rules and require oversight. The problem is how to guarantee compliance.

Philip Lee is WACC general secretary and editor of its international journal Media Development. His edited publications include The Democratization of Communication (1995); Many Voices, One Vision: The Right to Communicate in Practice (2004); Communicating Peace: Entertaining Angels Unawares (2008); and Public Memory, Public Media, and the Politics of Justice (with Pradip N. Thomas, 2012).

WACC Global is an international NGO that promotes communication as a basic human right, essential to people’s dignity and community.
