July 16, 2018

How Tech is Failing Victims of Intimate Partner Violence: Thomas Ristenpart at CITP

What technology risks are faced by people who experience intimate partner violence? How is the security community failing them, and what questions might we need to ask to make progress on social and technical interventions?

Speaking Tuesday at CITP was Thomas Ristenpart (@TomRistenpart), an associate professor at Cornell Tech and a member of the Department of Computer Science at Cornell University. Before joining Cornell Tech in 2015, Thomas was an assistant professor at the University of Wisconsin-Madison. His research spans a wide range of computer security topics, including digital privacy and safety in intimate partner violence, alongside work on cloud computing security, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography.

Throughout this talk, I found myself overwhelmed by the scope of the challenges faced by so many people– and inspired by the way that Thomas and his collaborators have taken thorough, meaningful steps on this vital issue.

Understanding Intimate Partner Violence

Intimate partner violence (IPV) is a huge problem, says Thomas. 25% of women and 11% of men will experience rape, physical violence, and/or stalking by an intimate partner, according to the National Intimate Partner and Sexual Violence Survey. To put these numbers in context for tech companies, this means that 360 million Facebook users and 252 million Android users will experience this kind of violence.

Prior research over the years has shown that abusers take advantage of technology to harm victims in a wide range of ways, including spyware, harassment, and non-consensual photography. Together with Nicki Dell, Diana Freed, Karen Levy, Damon McCoy, Rahul Chatterjee, Peri Doerfler, and Sam Havron, Thomas has been working with the New York City Mayor’s Office to Combat Domestic Violence (NYC CDV).

To start, the researchers spent a year doing qualitative research with people who experience domestic violence. The research that Thomas is sharing today draws from that work.

The research team worked with the New York City Family Justice Centers, which offer a range of services for domestic violence, sex trafficking, and elder abuse victims– from civil and legal services to access to shelters, counseling, and support from nonprofits. The centers were a crucial resource for the researchers, since they connect nonprofits, government actors, and survivors and victims. Over a series of year-long qualitative studies (see also this paper), researchers held 11 focus groups with 39 English- and Spanish-speaking women aged 18 to 65. Most of them were no longer with the abusive partner. They also held semi-structured interviews with 50 professionals working on IPV– case managers, social workers, attorneys/paralegals, and police officers. Together, this research represents the largest and most demographically diverse study to date of technology use in IPV.

Common Technology Attacks in Intimate Partner Violence Situations

The researchers spotted a range of common themes across clients of the NYC CDV. Clients talked about stalkers who accessed their phones and social media, installed spyware, took compromising images through the spyware, and then impersonated them, using the account to send compromising, intimate images to employers, family, and friends. Abusers take advantage of every available technology to create problems through many modes. Overall, the researchers identified four kinds of common attacks:

  • In ownership-based attacks, the abuser owns the account that the victim is using. This gives them immediate access to controlling the device. Often people will buy a device for someone else to gain a foothold in that person’s life and home.
  • In account/device compromise, someone compels, guesses, or otherwise compromises passwords.
  • Harmful messages or posts involve calling/texting/messaging the victim. This can also involve harassing a victim’s friends/family, and sometimes encouraging other people to harass that person by proxy.
  • Abusers also exposed private information: blackmailing someone by threat of exposure, sharing non-consensual intimate images, and creating fake profiles/advertisements for that person on other sites.

In many of these cases, abusers repurpose ordinary software for abusive ends. For example, abusers use two-factor authentication to lock victims out of their own accounts and to block account recovery.

Non-Technical Infrastructures Aren’t Helping Victims & Professionals with Technical Issues

Thomas tells us that despite these risks, they didn’t find a single technologist in the network of support for people facing intimate partner violence. So it’s not surprising that these services don’t have any best practices for evaluating technology risks. On top of that, victims overwhelmingly report having insufficient technology understanding to deal with tech abuse.

Abusers are typically considered to be “more tech-savvy” than victims, and professionals overwhelmingly report having insufficient technology understanding to help with tech abuse. Many of them just google as they go.

Thomas also points out that the intersection of technology and intimate partner violence raises important legal and policy issues. First, digital abuse is usually not recognized as a form of abuse that warrants a protection order. When someone goes to family court, they have to convince a judge to get a protection order, and judges aren’t convinced by digital harassment– even though a protection order can legally restrict an abuser from sending such messages. Second, when an abuser creates a fake account on a site like Tinder and creates “come rape me” style ads, the abuser is technically the legal owner of the account, so it can be difficult to take down the ads, especially on smaller websites that don’t respond to copyright takedown requests.

Technical Mechanisms are Failing Too: Context Undermines Existing Security Systems

Abusers aren’t the sophisticated cyber-operatives that people sometimes talk about at security conferences. Instead, the researchers saw two classes of attacks: (a) UI-bound adversaries: an adversarial but authenticated user who interacts with the system via the normal user interface, and (b) spyware adversaries, who install or repurpose commodity software for surveillance of the victim. Neither of these requires technical sophistication.

Why are these so effective? Thomas says the reason is that the threat models and assumptions in the security world don’t match these threats. For example, many systems are designed to protect against a stranger on the internet who doesn’t know the victim personally and connects from elsewhere. With intimate partner violence, the attacker knows the victim personally, can guess or compel disclosure of passwords, may connect from the victim’s computer or from the same home, and may own the account or device that’s being used. The abuser is often an earner who pays for accounts and devices.

The same problems apply to fake accounts and the detection of abusive content. Many fake social media profiles obviously belong to the abuser, but survivors are rarely able to prove it. When abusers send hurtful, abusive messages, someone who lacks the context may not be able to detect the abuse. Outside of the context of IPV, a picture of a gun might be just a picture of a gun– but in context, it can be very threatening.

Common Advice Also Fails Victims

Much of the common advice just won’t work. Sometimes people are urged to delete their accounts. But you can’t just shut off contact with an abuser– you might be legally obligated to communicate (shared custody of children). You can’t get new devices because the abuser pays for the phones, the family plan, and/or the children’s devices (which are a vector of surveillance). People can’t necessarily get off social media, because they need it to reach their friends and family. On top of that, any of these actions could escalate abuse; victims are very worried about cutting off access or uninstalling spyware because they fear further violence from the abuser.

Many Makers of Spyware Promote their Software for Intimate Partner Surveillance

Next, Thomas tells us about intimate partner surveillance (IPS), drawing on a new paper led by Diana Freed on how intimate partner abusers exploit technology. Shelters and family justice centers have had cases where someone showed up with software on their phone that allowed the abuser to track them down, kick down a door, and endanger the victim. Yet no one could name a single product that was used by abusers, partly because our ability to diagnose spyware from a technical perspective is limited. On the other hand, if you google “track my girlfriend,” you will find a host of companies peddling spyware.

To study the range of spyware systems, Thomas and his colleagues used “snowball” searching, using search-engine auto-complete to find related queries that other people were searching for. From a set of roughly 27k URLs, they investigated 100 randomly sampled URLs and found that 60% were related to intimate partner surveillance: how-to blogs, Q&A forums, news articles, app websites, and links to apps on the Google Play Store and the Apple App Store. Many of the professional-grade spyware providers offer apps directly through app stores, as well as “off-store” apps. The researchers labeled a thousand of the apps they found and discovered that about 28% of them were potential IPS tools.
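To make the search methodology concrete, here is a minimal sketch of snowball query expansion driven by auto-complete suggestions. It is an illustration only: the suggestion endpoint shown is Google’s unofficial suggest API, and the seed queries and depth limit are assumptions, not the researchers’ actual tooling.

```python
# Minimal sketch of snowball query expansion via auto-complete suggestions.
# The endpoint below is Google's unofficial suggest API; it may change or be
# rate-limited, and it is only an assumption that similar tooling was used.
from collections import deque
import requests

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def autocomplete(query):
    """Return auto-complete suggestions for a query (unofficial endpoint)."""
    resp = requests.get(SUGGEST_URL, params={"client": "firefox", "q": query}, timeout=10)
    resp.raise_for_status()
    return resp.json()[1]  # response format: [query, [suggestion, ...]]

def snowball(seed_queries, max_depth=2):
    """Breadth-first expansion: each query's suggestions become new queries."""
    seen = set(seed_queries)
    frontier = deque((q, 0) for q in seed_queries)
    while frontier:
        query, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for suggestion in autocomplete(query):
            if suggestion not in seen:
                seen.add(suggestion)
                frontier.append((suggestion, depth + 1))
    return seen

if __name__ == "__main__":
    queries = snowball(["track my girlfriend"])  # example seed query
    print(f"{len(queries)} candidate queries collected")
```

Each collected query would then be run against a search engine, the resulting URLs pooled and deduplicated, and a random sample labeled by hand, roughly mirroring the 100-URL sample described above.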

The researchers found overt tools for intimate partner surveillance, as well as systems for personal safety, theft tracking, child tracking, and employee tracking that were repurposed for abuse. In many cases, it’s hard to point to a single piece of software and say that it’s bad. While apps sometimes purport to help parents track children, searches related to intimate partner surveillance also surface paid ads for products that don’t directly claim to be for spying on partners. Ever since a ruling from the FTC, companies have worked to preserve plausible deniability.

In an audit study, the researchers emailed customer support for 11 apps (on-store and off-store), posing as an abuser. They received nine responses. Eight of them condoned intimate partner surveillance and gave advice on making the app hard to find. Only one indicated that such use could be illegal.

Many of these systems have rich capabilities: location tracking, texts, call recordings, media contents, app usage, internet activity logs, keylogging, geographic tracking. All of the off-store systems have covert features to hide the fact that the app is installed. Even some of the Google Play Store apps have features to make the apps covert.

Early Steps for Supporting Victims: Detecting Spyware

What’s the current state of the art? Right now, practitioners tell people that if their battery runs unusually low, they may be a victim of spyware– advice that is not very effective. Do spyware removal tools work? They had high but not perfect detection rates for off-store intimate partner surveillance systems. However, they did a poor job of detecting on-store spyware tools.
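As a rough illustration of what a simple on-device check could look like, here is a minimal sketch that lists installed Android packages over adb and flags matches against a blocklist. The package names in the blocklist are hypothetical placeholders, and this is not the researchers’ detection tool; as the talk makes clear, a blocklist of this kind would also miss dual-use on-store apps.

```python
# Minimal sketch: flag installed Android packages that appear on a blocklist
# of known intimate partner surveillance (IPS) apps. The package names below
# are hypothetical placeholders, not real products.
import subprocess

KNOWN_IPS_PACKAGES = {
    "com.example.spytracker",    # hypothetical
    "com.example.phonewatcher",  # hypothetical
}

def installed_packages():
    """Return the package names of all apps installed on the connected device."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each output line looks like "package:com.vendor.app"
    return {
        line.split(":", 1)[1].strip()
        for line in out.splitlines()
        if line.startswith("package:")
    }

def flag_suspicious():
    """Print and return any installed packages that match the blocklist."""
    found = installed_packages() & KNOWN_IPS_PACKAGES
    for pkg in sorted(found):
        print(f"WARNING: possible IPS app installed: {pkg}")
    return found

if __name__ == "__main__":
    flag_suspicious()
```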

 

Thomas recaps what they learned from this study: there’s a large ecosystem of spyware apps, the dual use of these apps creates a significant challenge, many developers are condoning intimate partner surveillance, and existing anti-spyware technologies are insufficient at detecting these tools.

Based on this work, Thomas and his collaborators are working with the NYC Mayor’s office and the National Network to End Domestic Violence to develop ways to detect spyware, to develop new surveys of technology risks, and to find new kinds of interventions.

Thomas concludes with an appeal to companies and computer scientists that we pay more attention to the needs of the most vulnerable people affected by our work, volunteer for organizations that support victims, and develop new approaches to protect people in these all-too-common situations.

(Mis)conceptions About the Impact of Surveillance

Does surveillance impact behavior? Or is its effect, if real, only temporary or trivial? Government surveillance is back in the news thanks to the so-called “Nunes memo”, making this a perfect time to examine new research on the impact of surveillance. That includes my own recent work: my doctoral research at the Oxford Internet Institute, University of Oxford, examined “chilling effects” online, that is, how online surveillance and other regulatory activities may impact, chill, or deter people’s activities online.

Though the controversy surrounding the Nunes memo critiquing FBI surveillance under the Foreign Intelligence Surveillance Act (FISA) is primarily political, it takes place against the backdrop of the wider debate about Congressional reauthorization of FISA’s Section 702, which allows the U.S. Government to intercept and collect emails, phone records, and other communications of foreigners residing abroad, without a warrant. On that count, civil society groups have expressed concerns about the impact of government surveillance like that available under FISA, including “chilling effects” on rights and freedoms. Indeed, civil liberties and rights activists have long argued, and surveillance experts like David Lyon have long explained, that surveillance and similar threats can have these corrosive impacts.

Yet skepticism about such claims is common and persistent. As Kaminski and Witnov recently noted, many “evince skepticism over the effects of surveillance”, with deep disagreements over the “effects of surveillance” on “intellectual queries” and “development”. But why? The answer is complicated, but it likely lies partly in the present (thin) state of research on these issues and partly in common conceptions, and misconceptions, about surveillance and its impact on people and broader society.

Skepticism and assumptions about impact
Skepticism about surveillance impacts like chilling effects is, as noted, persistent, with commentators like Stanford Law’s David Sklansky insisting there is “little empirical support” for chilling effects associated with surveillance, or Leslie Kendrick, of UVA Law, labeling the evidence supporting such claims “flimsy” and calling for more systematic research on point. Part of the problem is precisely this: the impact of surveillance—both mass and targeted forms—is difficult to document, measure, and explore, especially chilling effects or self-censorship. This is because demonstrating self-censorship or chill requires showing a counterfactual state of affairs: that a person would have said or done something but for some surveillance threat or awareness.

But another challenge, just as important to address, concerns common assumptions and perceptions as to what surveillance impact or chilling effects might look like. Here, both members of the general public as well as experts, judges, and lawyers often assume or expect surveillance to have obvious, apparent, and pervasive impact on our most fundamental democratic rights and freedoms—like clear suppression of political speech or the right to peaceful assembly.

A great example of this assumption, leading to skepticism about whether surveillance may promote self-censorship or have broader societal chilling effects, is expressed by University of Chicago Law’s Eric Posner. Posner, a leading legal scholar who also incorporates empirical methods in his work, conveys his skepticism about the “threat” posed by National Security Agency (NSA) surveillance in a New York Times “Room for Debate” discussion, writing:

This brings me to another valuable point you made, which is that when people believe that the government exercises surveillance, they become reluctant to exercise democratic freedoms. This is a textbook objection to surveillance, I agree, but it also is another objection that I would place under “theoretical” rather than real.  Is there any evidence that over the 12 years, during the flowering of the so-called surveillance state, Americans have become less politically active? More worried about government suppression of dissent? Less willing to listen to opposing voices? All the evidence points in the opposite direction… It is hard to think of another period so full of robust political debate since the late 1960s—another era of government surveillance.

For Posner, the mere existence of “robust” political debate and activities in society is compelling evidence against claims about surveillance chill.

Similarly, Sklansky argues not only that there is “little empirical support” for the claim that surveillance would “chill independent thought, robust debate, personal growth, and intimate friendship”— what he terms “the stultification thesis”—but like Posner, he finds persuasive evidence against the claim “all around us”. He cites, for example, the widespread “sharing of personal information” online (which presumably would not happen if surveillance was having a dampening effect); how employer monitoring has not deterred employee emailing nor freedom of information laws deterred “intra-governmental communications”; and how young people, the “digital natives” that have grown up with the internet, social media, and surveillance, are far from stultified and conforming but arguably even more personally expressive and experimental than previous generations.  In light of all that, Sklansky dismisses surveillance chill as simply not “worth worrying about”.

I sometimes call this the “Orwell effect”—the common assumption, likely thanks to the immense impact Orwell’s classic novel 1984 has had on popular culture, that surveillance will have dystopian societal impact, with widespread suppression of personal sharing, expression, and political dissent. When Posner and Sklansky (and others who share these common expectations) do not see these more obvious and far-reaching impacts, they then discount more subtle and less apparent impacts and effects that may, over the long term, be just as concerning for democratic rights and freedoms. Of course, theorists and scholars like Daniel Solove have long interrogated and critiqued Orwell’s impact on our understanding of privacy, and Sklansky is himself wary of Orwell’s influence, so it is no surprise that Orwell’s work also shapes common beliefs and conceptions about the impact of surveillance. That influence is compounded by the earlier noted lack of systematic empirical research providing more grounded insights and understanding.

This is not only an academic issue. Government surveillance powers and practices are often justified with reference to national security concerns and threats like terrorism, as this House brief on the FISA re-authorization illustrates. If concerns about chilling effects associated with surveillance and other negative impacts are minimized or discounted based on misconceptions or a thin empirical grounding, then challenging surveillance powers and their expansion is much more difficult, with real, concrete implications for rights and freedoms.

So the challenge of documenting, exploring, and understanding the impact of surveillance is really two-fold. The first part is one of research methodology and design: designing research that can document the impact of surveillance. The second concerns common assumptions and perceptions as to what surveillance chilling effects might look like—with even experts like Posner or Sklansky assuming widespread speech suppression and conformity due to surveillance.

New research, new insights
Today, new systematic empirical research on the impact of surveillance is being done, with several recent studies documenting surveillance chilling effects in different contexts, including studies by Stoycheff [1] and Marthews and Tucker [2], as well as my own recent research. That research includes an empirical legal study[3] on how the Snowden revelations about NSA surveillance impacted Wikipedia use—which received extensive media coverage in the U.S. and internationally—and a more recent study[4], which I wrote about recently in Slate, that examined, among other things, how state and corporate surveillance impact or “chill” certain people or groups differently. A lot of this new work was not possible before, as it is based on new forms of data being made available to researchers and on insights gleaned from analyzing public leaks and disclosures concerning surveillance, like the Snowden revelations.
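To give a sense of how such chilling effects can be measured, here is a minimal sketch of an interrupted time-series regression on activity counts before and after a surveillance revelation. The data file, column names, and date are illustrative assumptions, and this is a generic version of the approach rather than the exact models used in the studies cited above.

```python
# Minimal sketch of an interrupted time-series test for a "chilling effect":
# regress log(activity) on a time trend, a post-revelation indicator, and a
# post-revelation trend change. Data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

REVELATION_DATE = "2013-06-06"  # e.g., the date of the first Snowden disclosures

# Hypothetical daily activity data with columns "date" and "views".
df = pd.read_csv("daily_pageviews.csv", parse_dates=["date"]).sort_values("date")
df = df.reset_index(drop=True)

df["t"] = range(len(df))                                  # linear time trend
df["post"] = (df["date"] >= REVELATION_DATE).astype(int)  # 1 after the revelation
t0 = df.loc[df["post"] == 1, "t"].min()                   # time index of the intervention
df["post_t"] = df["post"] * (df["t"] - t0)                # trend change after the intervention

y = np.log(df["views"])
X = sm.add_constant(df[["t", "post", "post_t"]])
model = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 7})  # autocorrelation-robust errors
print(model.summary())

# A significantly negative coefficient on "post" (an immediate drop) or on "post_t"
# (a flattening of the trend) would be consistent with a chilling effect.
```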

The story these and other new studies tell when it comes to the impact of surveillance is more complicated and subtle, suggesting the common assumptions of Posner and Sklansky are actually misconceptions. Though more subtle, these impacts are no less concerning and corrosive to democratic rights and freedoms, a point consistent with the work of surveillance studies theorists like David Lyon[5] and warnings from researchers at places like the Citizen Lab[6], Berkman Klein Center[7], and here at the CITP[8].  In subsequent posts, I will discuss these studies more fully, to paint a broader picture of surveillance effects today and, in light of increasingly sophisticated targeting and emerging automation technologies, tomorrow. Stay tuned.

* Jonathon Penney is a Research Affiliate of Princeton’s CITP, a Research Fellow at the Citizen Lab, located at the University of Toronto’s Munk School of Global Affairs, and teaches law as an Assistant Professor at Dalhousie University. He is also a research collaborator with CivilServant at the MIT Media Lab. Find him on Twitter at @jon_penney.

[1] Stoycheff, E. (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly. doi: 10.1177/1077699016630255

[2] Marthews, A., & Tucker, C. (2014). Government Surveillance and Internet Search Behavior. MIT Sloan Working Paper No. 14380.

[3] Penney, J. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Tech. L.J., 31, 117-182.

[4] Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, forthcoming

[5] See for example: Lyon, D. (2015). Surveillance After Snowden. Cambridge, MA: Polity Press; Lyon, D. (2006). Theorizing surveillance: The panopticon and beyond. Cullompton, Devon: Willan Publishing; Lyon, D. (2003). Surveillance After September 11. Cambridge, MA: Polity. See also Marx, G.T., (2002). What’s New About the ‘New Surveillance’? Classifying for Change and Continuity. Surveillance & Society, 1(1), pp. 9-29;  Graham, S. & D. Wood. (2003). Digitising Surveillance: Categorisation, Space, Inequality, Critical Social Policy, 23(2): 227-248.

[6] See for example, recent works: Parsons, C., Israel, T., Deibert, R., Gill, L., and Robinson, B. (2018). Citizen Lab and CIPPIC Release Analysis of the Communications Security Establishment Act. Citizen Lab Research Brief No. 104, January 2018; Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1-11; Deibert, R. (2015). The Geopolitics of Cyberspace After Snowden. Current History, 114(768): 9-15; Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto: McClelland & Stewart.

[7] See for example, recent work on the Surveillance Project, Berkman Klein Center for Internet and Society, Harvard University.

[8] See for example, recent work: Su, J., Shukla, A., Goel, S., & Narayanan, A. (2017). De-anonymizing Web Browsing Data with Social Networks. World Wide Web Conference 2017; Zeide, E. (2017). The Structural Consequences of Big Data-Driven Education. Big Data, 5(2): 164-172, https://doi.org/10.1089/big.2016.0061; MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books; Narayanan, A. & Shmatikov, V. (2009). See also multiple previous Freedom to Tinker posts discussing research and issues on point.

 

AdNauseam, Google, and the Myth of the “Acceptable Ad”

Earlier this month, we (Helen Nissenbaum, Mushon Zer-Aviv, and I) released a new and improved AdNauseam 3.0. For those not familiar, AdNauseam is the adblocker that clicks every ad in an effort to obfuscate tracking profiles and inject doubt into the lucrative economic system that drives advertising-based surveillance. The 3.0 release contains some new features we’ve been excited to discuss with users and critics, but the discussion was quickly derailed when we learned that Google had banned AdNauseam from its store, where it had been available for the past year. We also learned that Google has disallowed users from manually installing or updating AdNauseam on Chrome, effectively locking them out of their own saved data, all without prior notice or warning.

Whether or not you are a fan of AdNauseam’s strategy, it is disconcerting to know that Google can quietly make one’s extensions and data disappear at any moment, without so much as a warning. Today it is a privacy tool that is disabled, but tomorrow it could be your photo album, chat app, or password manager. You don’t just lose the app, you lose your stored data as well: photos, chat transcripts, passwords, etc. For developers, who, incidentally, must pay a fee to post items in the Chrome store, this should give pause. Not only can your software be banned and removed without warning, with thousands of users left in the lurch, but all comments, ratings, reviews, and statistics are deleted as well.

When we wrote Google to ask the reason for the removal, they responded that AdNauseam had breached the Web Store’s Terms of Service, stating that “An extension should have a single purpose that is clear to users”[1]. However, the sole purpose of AdNauseam seems readily apparent to us—namely to resist the non-consensual surveillance conducted by advertising networks, of which Google is a prime example. Now we can certainly understand why Google would prefer users not to install AdNauseam, as it opposes their core business model, but the Web Store’s Terms of Service do not (at least thus far) require extensions to endorse Google’s business model. Moreover, this is not the justification cited for the software’s removal.

So we are left to speculate as to the underlying cause for the takedown. Our guess is that Google’s real objection is to our newly added support for the EFF’s Do Not Track mechanism[2]. For anyone unfamiliar, this is not the ill-fated DNT of yore, but a new, machine-verifiable (and potentially legally binding) assertion on the part of websites that commit to not violating the privacy of users who choose to send the DNT header. A new generation of blockers, including the EFF’s Privacy Badger and now AdNauseam, has built-in support for this mechanism, which means that they don’t (by default) block ads and other resources from DNT sites, and, in the case of AdNauseam, don’t simulate clicks on these ads.
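To illustrate what “machine-verifiable” means here, a minimal sketch is shown below: a blocker can fetch a domain’s copy of the policy from the well-known path the EFF mechanism uses and compare it against known policy versions. The hash list is a placeholder, not the actual set a tool like Privacy Badger ships with.

```python
# Minimal sketch: check whether a domain has posted EFF's Do Not Track policy
# at the well-known location and compare it against known policy versions.
# The hash set below is a placeholder, not the real list of policy hashes.
import hashlib
import requests

KNOWN_POLICY_HASHES = {
    "0000000000000000000000000000000000000000",  # placeholder for a real policy-version hash
}

def domain_commits_to_dnt(domain):
    """Return True if the domain serves a recognized copy of the EFF DNT policy."""
    url = f"https://{domain}/.well-known/dnt-policy.txt"
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return False
    if resp.status_code != 200:
        return False
    digest = hashlib.sha1(resp.content).hexdigest()
    return digest in KNOWN_POLICY_HASHES

if __name__ == "__main__":
    print(domain_commits_to_dnt("example.com"))
```

A blocker that recognizes the posted policy can then leave that site’s ads unblocked (and, in AdNauseam’s case, unclicked) while continuing to block and obfuscate everywhere else.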

So why is this so threatening to Google? Perhaps because it could represent a real means for users, advertisers, and content-providers to move away from surveillance-based advertising. If enough sites commit to Do Not Track, there will be significant financial incentive for advertisers to place ads on those sites, and these too will be bound by DNT, as the mechanism also applies to a site’s third-party partners. And this could possibly set off a chain reaction of adoption that would leave Google, which has committed to surveillance as its core business model, out in the cold.

But wait, you may be thinking, why did the EFF develop this new DNT mechanism when there is AdBlock Plus’ “Acceptable Ads” program, which Google and other major ad networks already participate in?

That’s because there are crucial differences between the two. For one, “Acceptable Ads” is pay-to-play; large ad networks pay Eyeo, the company behind Adblock Plus, to whitelist their sites. But the more important reason is that the program is all about aesthetics—so-called “annoying” or “intrusive” ads—which the ad industry would like us to believe is the only problem with the current system. An entity like Google is fine with “Acceptable Ads” because they have more than enough resources to pay for whitelisting[3]. Further, they are quite willing to make their ads more aesthetically acceptable to users (after all, an annoyed user is unlikely to click)[4]. What they refuse to change (though we hope we’re wrong about this) is their commitment to surreptitious tracking on a scale never before seen. And this, of course, is what we, the EFF, and a growing number of users find truly “unacceptable” about the current advertising landscape.

 

[1]  In the one subsequent email we received, a Google representative stated that a single extension should not perform both blocking and hiding. This is difficult to accept at face value as nearly all ad blockers (including uBlock, Adblock Plus, Adblock, Adguard, etc., all of which are allowed in the store) also perform blocking and hiding of ads, trackers, and malware. Update (Feb 17, 2017): it has been a month since we have received any message from Google despite repeated requests for clarification, and despite the fact that they claim, in a recent Consumerist article, to be “in touch with the developer to help them resubmit their extension to get included back in the store.”

[2] This is indeed speculation. However, as mentioned in [1], the stated reason for Google’s ban of AdNauseam does not hold up to scrutiny.

[3]  In September of this year, Eyeo announced that it would partner with a UK-based ad tech startup called ComboTag to launch the “Acceptable Ads Platform”, with which they would also act as an ad exchange, selling placements for “Acceptable Ad” slots. Google, as might be expected, reacted negatively, stating that it would no longer do business with ComboTag. Some assumed that this might also signal an end to their participation in “Acceptable Ads” as well. However, this does not appear to be the case. Google still comprises a significant portion of the exception list on which “Acceptable Ads” is based and, as one ad industry observer put it, “Google is likely Adblock Plus’ largest, most lucrative customer.”

[4]  Google is also a member of the “Coalition for Better Ads”, an industry-wide effort which, like “Acceptable Ads”, focuses exclusively on issues of aesthetics and user experience, as opposed to surveillance and data profiling.