August 8, 2022

CITP Case Study on Regulating Facial Recognition Technology in Canada

Canada, like many jurisdictions in the United States, is grappling with the growing use of facial recognition technology in the private and public sectors. This technology is being deployed at a rapid pace in airports, retail stores, and social media platforms, and by law enforcement – with little government oversight.

To help address this challenge, I organized a tech policy case study on the regulation of facial recognition technology with two Canadian members of parliament – the Honourable Greg Fergus and Matthew Green. Both sit on the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics (ETHI), and I served as a legislative aide to them through the Parliamentary Internship Programme before joining CITP. Our goal for the session was to put policymakers in conversation with subject matter experts.

The core problem is a lack of accountability in the use of facial recognition technology, which exacerbates historical forms of discrimination and puts marginalized communities at risk of a wide range of harms. For instance, a recent story describes the fate of three Black men who were wrongfully arrested after being misidentified by facial recognition software. As the Canadian Civil Liberties Association argues, the police’s use of facial recognition technology, notably that provided by the New York-based company Clearview AI, “points to a larger crisis in police accountability when acquiring and using emerging surveillance tools.”

A number of academics and researchers – such as the DAIR Institute’s Timnit Gebru and the Algorithmic Justice League’s Joy Buolamwini, who documented the misclassification of darker-skinned women in a recent paper – are bringing attention to the discriminatory algorithms associated with facial recognition, which have put racialized people, women, and members of the LGBTIQ community at greater risk of false identification.

Meanwhile, Canadian officials are beginning to tackle the real-world consequences of the use of facial recognition. A year ago, the Office of the Privacy Commissioner found that Clearview AI had scraped billions of images of people from the internet in what “represented mass surveillance and was a clear violation of the privacy rights of Canadians.”

Following that investigation, Clearview AI stopped providing services to the Canadian market, including the Royal Canadian Mounted Police. In light of these findings and the absence of dedicated legislation, the ETHI Committee began studying the uses of facial recognition technology in May 2021, and has recently resumed this work by focusing on its use by various levels of government in Canada, law enforcement agencies, and private corporations.

The CITP case study session on March 24 began with a presentation by Angelina Wang, a graduate affiliate of CITP, who provided a technical overview explaining the different functions and harms associated with this technology. Following Wang’s presentation, I provided a regulatory overview of how U.S. lawmakers have addressed facial recognition, noting the different legislative strategies deployed for law enforcement, private sector, and public sector uses. We then had a substantive, free-flowing discussion with CITP researchers and the policymakers about the challenges and opportunities of different regulatory strategies.

Following CITP’s case study session, Wang and Dr. Elizabeth Anne Watkins, a CITP Fellow, were invited to testify before the ETHI Committee at an April 4 hearing. Wang discussed the different tasks facial recognition technology can and cannot perform, how the models are created, why they are susceptible to adversarial attacks, and the ethical implications of the creation of this technology. Dr. Watkins’ testimony provided an overview of the privacy, security, and safety concerns related to private industry’s use of facial verification on workers, as informed by her research. The committee is expected to report its findings by the end of May 2022.

We continue to do research on how Canada might regulate facial recognition technology and will publish those analyses in the coming months.

Collateral Freedom in China

OpenITP has just released a new report—Collateral Freedom—that studies the state of censorship circumvention tool usage in China today. From the report’s overview:

This report documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship—and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long term improvements in Internet freedom in China.

The circumvention tools that work best for these users are technologically diverse, but they are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Our survey respondents are relying not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.

Download the full report here: http://openitp.org/?q=node/44

The study was conducted by CITP alums David Robinson and me, along with Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund. We wrote it primarily for developers and funders of censorship circumvention technology projects, but it is also designed to be accessible to non-technical policymakers who are interested in Internet freedom, and to China specialists without a technology background.

On kids and social networking

Sunday’s New York Times has an article about cyber-bullying that’s currently #1 on their “most popular” list, so this is clearly a topic that many readers find close to home.

The NYT article focuses on schools’ central role in policing their students’ social behavior. While I’m all in favor of students being taught, particularly by older peer students, the importance of self-moderating their communications, schools face a fundamental quandary:

Nonetheless, administrators who decide they should help their cornered students often face daunting pragmatic and legal constraints.

“I have parents who thank me for getting involved,” said Mike Rafferty, the middle school principal in Old Saybrook, Conn., “and parents who say, ‘It didn’t happen on school property, stay out of my life.’ ”

Judges are flummoxed, too, as they wrestle with new questions about protections on student speech and school searches. Can a student be suspended for posting a video on YouTube that cruelly demeans another student? Can a principal search a cellphone, much like a locker or a backpack?

It’s unclear. These issues have begun their slow climb through state and federal courts, but so far, rulings have been contradictory, and much is still to be determined.

Here’s one example that really bothers me:

A few families have successfully sued schools for failing to protect their children from bullies. But when the Beverly Vista School in Beverly Hills, Calif., disciplined Evan S. Cohen’s eighth-grade daughter for cyberbullying, he took on the school district.

After school one day in May 2008, Mr. Cohen’s daughter, known in court papers as J. C., videotaped friends at a cafe, egging them on as they laughed and made mean-spirited, sexual comments about another eighth-grade girl, C. C., calling her “ugly,” “spoiled,” a “brat” and a “slut.”

J. C. posted the video on YouTube. The next day, the school suspended her for two days.

“What incensed me,” said Mr. Cohen, a music industry lawyer in Los Angeles, “was that these people were going to suspend my daughter for something that happened outside of school.” On behalf of his daughter, he sued.

If schools don’t have the authority to discipline J. C., as the court apparently ruled, and her father is more interested in defending her than disciplining her for clearly inappropriate behavior, then can we find some other solution?

Of course, there’s nothing new about bullying among the early-teenage set. I will refrain from dredging up such stories from my own pre-Internet, pre-SMS childhood, but there’s no question that these kids are at an important stage of their lives, where they’re still learning essential concepts, like how to relate to their peers and the importance (or lack thereof) of their peers’ approval, much less where to draw boundaries between their public selves and their private feelings. It’s certainly important for us, the responsible adults of the world, to recognize that nothing we can say or do will change the fundamental social awkwardness of this age. There will never be an ironclad solution that eliminates kids bullying, taunting, or otherwise hurting one another.

Given all that, the rise of electronic communications (whether SMS text messaging, Facebook, email, or whatever else) changes the game in one very important way: it increases the velocity of communications. Every kid now has a megaphone for reaching their peers, whether directly through a Facebook posting that can reach hundreds of friends at once or indirectly through the viral spread of embarrassing gossip from friend to friend, and that speed can cause salacious information to get around well before any traditional mechanisms (parental, school administrative, or otherwise) can clamp down and assert some measure of sanity. For perhaps the ultimate example of this, see the possibly fictitious yet nonetheless illustrative story of a girl’s written hookup list, posted by her brother in revenge for her ratting out his hidden stash of beer. Needless to say, in one fell swoop, this girl’s life got turned upside down with no obvious way to repair the social damage.

Alright, we invented this social networking mess. Can we fix it?

The only mechanism I feel is completely inappropriate is this:

But Deb Socia, the principal at Lilla G. Frederick Pilot Middle School in Dorchester, Mass., takes a no-nonsense approach. The school gives each student a laptop to work on. But the students’ expectation of privacy is greatly diminished.

“I regularly scan every computer in the building,” Ms. Socia said. “They know I’m watching. They’re using the cameras on their laptops to check their hair and I send them a message and say: ‘You look great! Now go back to work.’ It’s a powerful way to teach kids: ‘I’m paying attention, you need to do what’s right.’ ”

Not only do I object to the Big Brother aspect of this (do schools still have 1984 on their reading lists?), but turning every laptop into a surveillance device also creates a hugely tempting target for a variety of bad actors. Kids need and deserve some measure of privacy, at least to the extent that schools already give kids a measure of privacy against arbitrary and unjustified search and seizure.

Surveillance is widely considered to be more acceptable when it’s being done by parents, who might insist they have their kids’ passwords in order to monitor them. Of course, kids of this age will reasonably want or need to have privacy from their parents as well (e.g., we don’t want to create conditions where victims of child abuse can be easily locked down by their family).

We could try to invent technical means to slow down the velocity of kids’ communications, which could mean adding delays as a function of the fanout of a message, or even giving viewers of any given message a kill switch that could reach back and nuke earlier forwarded copies sent to other parties. Of course, such mechanisms could be easily abused. Furthermore, if Facebook were to voluntarily create such a mechanism, kids might well migrate to other services that lack it. If we legislate that children of a certain age must have technically-imposed communication limits across the board (e.g., a limited number of SMS messages per day), then we could easily get into a world where a kid who hits a daily quota cannot communicate in an unexpectedly urgent situation (e.g., when stuck at a party where there’s drinking and needing a sober ride home).
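
To make the fanout-delay and kill-switch ideas concrete, here is a minimal sketch of how such a mechanism might work. To be clear, everything here is invented for illustration: the class name DelayedFanoutQueue, the one-second-per-recipient delay, and the message IDs are all hypothetical, and no real messaging platform exposes an API like this.

    import heapq
    import itertools
    import time

    class DelayedFanoutQueue:
        """Hypothetical sketch: delivery delay grows with a message's
        fanout, and a 'kill switch' can recall undelivered copies."""

        BASE_DELAY = 1.0  # assumed: one second of delay per recipient

        def __init__(self):
            self._queue = []          # (deliver_at, seq, msg_id, recipient, text)
            self._seq = itertools.count()  # tie-breaker for equal timestamps
            self._killed = set()      # msg_ids recalled before delivery

        def post(self, msg_id, text, recipients):
            # Delay scales with fanout: a message blasted to 200 friends
            # sits in the queue far longer than a one-to-one note, leaving
            # time for the sender (or anyone else) to think better of it.
            deliver_at = time.time() + self.BASE_DELAY * len(recipients)
            for recipient in recipients:
                heapq.heappush(self._queue,
                               (deliver_at, next(self._seq), msg_id, recipient, text))

        def kill(self, msg_id):
            # The kill switch: reach back and nuke copies not yet delivered.
            self._killed.add(msg_id)

        def deliver_due(self):
            # Deliver every copy whose delay has elapsed, skipping recalls.
            delivered = []
            now = time.time()
            while self._queue and self._queue[0][0] <= now:
                _, _, msg_id, recipient, text = heapq.heappop(self._queue)
                if msg_id not in self._killed:
                    delivered.append((recipient, text))
            return delivered

    # Hypothetical usage: post widely, think better of it, recall in time.
    q = DelayedFanoutQueue()
    q.post("msg-1", "embarrassing gossip", ["alice", "bob", "carol"])
    q.kill("msg-1")          # recalled during the three-second fanout delay
    print(q.deliver_due())   # -> [] ; the copies never reach anyone

Note that a sketch like this can only nuke copies still inside the one queue it controls; once a copy is delivered or forwarded through another service, the kill switch loses its reach, which is exactly why such mechanisms are easy to evade in practice.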

Absent any reasonable technical solution, the proper answer is probably to restrict our kids’ access to social media until we think they’re mature enough to handle it, to make sure that we, the parents, educate them about the proper etiquette, and that we take responsibility for disciplining our kids when they misbehave.