The killing of George Floyd in Minneapolis has led to calls to de-fund, reform or completely dismantle the police in the United States. Such discussions have also been amplified here in Canada.
With the public gaze now fixed on law enforcement agencies and their role in systemic racism, mainstream assumptions about the police are being challenged, including the view constructed by popular culture that portrays police as crime-fighting heroes who represent the thin blue line between order and chaos.
An important aspect of re-imagining the police — though one receiving less attention in Canada — is addressing their expanding use of high-tech surveillance technologies.
We live in a data-driven society. As we go about our daily lives, we leave behind digital footprints. Many are becoming aware that our searches on Google, our interactions through social media, our visits to friends and family, and our credit card purchases are collected and analyzed by corporations to provide more information about us, including where we’ve been and what we like.
This awareness is partly driven by people’s increased knowledge of social media companies’ business models that rely on algorithms to direct users to content and promote targeted advertisements.
What may have started with corporations profiling and making decisions about “consumers” to maximize profits has evolved into governments around the world increasingly collecting and analyzing digital trails as well.
Although such practices in certain applications may be used for the benefit of individuals or wider society, in other areas, including criminal justice and law enforcement, they have proven controversial in recent years.
The proliferation of data-driven technologies has led to what American law professor Andrew Ferguson calls “big data policing.” Research reveals how such practices increase digital surveillance, posing a growing threat to civil liberties and exacerbating bias, overreach and abuse in policing.
Although discussions about the expansion of digital surveillance technologies within law enforcement have primarily arisen in the United States and Europe, the extent of these practices in Canada remains largely unexplored. Vancouver, British Columbia, and London, Ontario have reportedly adopted software that uses data to try to predict where and when property crime is likely to take place.
Police agencies remain secretive over their use of surveillance technologies that invisibly pierce through boundaries designed to protect liberty.
For instance, police agencies in Canada admitted to using Clearview AI’s facial recognition software only after documents were leaked; the RCMP at first denied using it outright before conceding that it had. Lying tarnishes the public image of police and erodes trust, which is difficult and time-consuming to rebuild.
Amid a privacy probe, Clearview AI indefinitely suspended its dealings with Canadian police agencies. However, some police services, including those of Peel and York regions, are likely still procuring facial-recognition technology.
Similarly, when news broke about American police using the data-mining software MediaSonar to track keywords on social media, including “Black Lives Matter,” documents showed Canadian police agencies were also using the software, though many chose not to publicly confirm its use.
The RCMP, too, has been involved in wide-scale monitoring of social media activity through Project Wide Awake, which has expanded to proactively identify potential crimes online before they occur.
Police in Ontario and Saskatchewan have been using a “risk-driven tracking database,” which involves sharing information on vulnerable groups of people between police, schools, health care workers and social workers to track “negative behaviour” and identify those potentially “at-risk.”
Increased interconnectedness of technologies is expected to grow through the internet of things, expanding the potential of surveillance. If left unchecked, there is a risk that this flow of information between devices will find its way to law enforcement. Smart watches, smart cars, smart appliances and devices like Google Home or Amazon Alexa provide further ways to collect and analyze information on people.
Countless studies, inquiries and commissions on Canadian policing have identified Black and Indigenous peoples as disproportionately vulnerable to police surveillance and violence. Whether it be facial recognition, social media analysis, predictive policing software or analyzing the risk of “negative behaviours,” these technologies rely on algorithms.
Research has shown that algorithms harbour biases against disadvantaged groups, reinforcing structural discrimination and deepening social inequality. Recognizing these potential consequences, New Zealand recently announced it will set standards for how public agencies use algorithms, including requiring them to identify any biases within them.
The adoption of data-driven technologies that expand the scope and depth of surveillance by police therefore moves beyond privacy and involves questions related to our democracy, including human dignity and the right to be free from discrimination. This makes the issue a concern for all Canadians.
The killing of George Floyd sparked renewed outrage and activism against institutional racism and police brutality. Surveillance technologies are often shrouded in notions of tech-neutrality, objectivity, efficiency and progress; uncovering and restructuring the silent, invisible role they play within these institutions must be an important part of the path forward.
Joe Masoodi is a policy analyst on technology, cybersecurity and democracy at the Ryerson Leadership Lab at Ryerson University. He has previously conducted research at the Surveillance Studies Centre at Queen’s University on surveillance, technology and policing, and has completed degrees from the Royal Military College of Canada and Queen’s University.
Image: Lianhao Qu/Unsplash