
How Have In-Flight Web Page Modification Practices Changed over the Past Ten Years?

When we browse the web, many parties and organizations can see which websites we visit, because they sit on the path between web clients (our computers and mobile devices) and the web servers hosting the sites we request. Most obviously, Internet Service Providers (ISPs) are responsible for transmitting our web traffic, but reports (e.g. [1], [2], [3]) have shown that they may also inject ads into users’ requested web pages to increase revenue. Other parties may intercept our web traffic for a wide variety of reasons: content-delivery networks (CDNs) answer requests on behalf of geographically distant web servers to speed up response times, enterprise software and programs running on our devices may inspect incoming pages for added security or privacy before passing them to our browser, and malicious adversaries may attempt to inject malware into requested web content before we receive it.


In 2007, a research group at the University of Washington conducted a study to measure how often these web page modifications occur in practice, and to determine who is responsible for the modifications. Web page modifications were identified using a small piece of software embedded in a test web page, a so-called “web tripwire”, that compared a known good representation of the web page with the version of the test web page users saw in their browsers. The researchers then attributed the modifications to ISPs, malicious attackers, and client software such as ad blockers, using IP addresses and by finding identifying keywords in the injected web content. They found that only about 1.3% of participating web clients saw page modifications. But much about how we interact with and browse the web has changed over the past ten years. More specifically, with the emergence of mobile technologies and new network parties such as CDNs, it is important to learn if and how these new developments have affected in-flight modification practices.
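The tripwire's core idea, comparing a known-good copy of the page against what the browser actually received, can be sketched in a few lines. In the study the tripwire is JavaScript embedded in the test page itself; the Python below is only an illustration of the comparison logic, and the markup and names are invented for the example.

```python
import difflib

# Hypothetical known-good copy of the test page, as served by the server.
EXPECTED_HTML = "<html><body><p>Hello, tripwire!</p></body></html>"

def check_tripwire(received_html: str) -> list[str]:
    """Compare the page the browser actually received against the
    known-good copy; return a unified diff of any in-flight changes."""
    if received_html == EXPECTED_HTML:
        return []  # page arrived unmodified
    return list(difflib.unified_diff(
        EXPECTED_HTML.splitlines(),
        received_html.splitlines(),
        fromfile="expected", tofile="received", lineterm=""))

# An ISP-style ad injection would surface in the diff:
modified = EXPECTED_HTML.replace(
    "</body>", '<div id="injected-ad">Buy now!</div></body>')
```

A real tripwire would then report the diff back to the measurement server, which is where attribution (ISP, client software, or attacker) begins.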


We invite you to take part in our research study. Following the same setup as the UW study, we have created a test web page containing a “web tripwire”. If it detects any in-flight page modifications in our test page, it sends us a copy of the modified version of our web page that your browser received. We minimize the information that we collect to detect page modifications. In addition to page modification data, we only record information that web servers normally record, such as IP address, browser type, date and time of page request, and a cookie to differentiate between users. We will permanently remove any personal information found in the page modifications before sending the modification data to our servers.


By participating in this study, you are helping us gather information crucial for guiding research and building tools to improve web privacy. If you’re willing to contribute to our study, it’s as simple as visiting our test web page: http://stormship.cs.princeton.edu. If possible, we also ask you to visit our page through multiple different devices and browsers, as this will help diversify our collected data. Our test page contains more details about our study, and we will post our results there when we have completed our measurements.

Please reach out to Annie Edmundson or Marcela Melara with any questions, concerns, or feedback. We greatly appreciate your help in our efforts to improve web privacy!

Why the FCC should prevent ISPs from micromanaging our lives

by Brett Frischmann and Evan Selinger*

Network neutrality prevents broadband Internet service providers from micromanaging our lives online. Constraining the networks this way enables and even empowers Internet users to be active and productive human beings rather than passive consumers. Unfortunately, the network neutrality debate is so polarized that neither side sees the full picture.

On one side, opponents of net neutrality view the Federal Communications Commission’s 2015 Open Internet Order as “heavy-handed government regulation” that excessively meddles with broadband Internet service providers like Verizon and Comcast. Opponents are looking at a fun house mirror. Despite their repeated false claims that the government will micromanage the Internet through burdensome price regulation, the Open Internet Order only constrains micromanagement by broadband Internet service providers. Opponents ignore—if not intentionally distort—the concentrated private power of ISPs, while grossly exaggerating the scope and impact of the FCC’s actual rules.

On the other side, net neutrality advocates see the FCC’s intervention as light regulation that levels the playing field for edge providers—big content companies like Google and Netflix that deliver services to consumers from the edge of the network. Advocates worry that such providers can be squeezed out if ISPs discriminate in favor of their own programming or affiliates—think about on-demand television versus streaming television services, for example. While net neutrality proponents push for the right policy, they aren’t making the strongest case possible and often concede too much ground. In their rush to protect content providers, they shoot themselves in the foot by perpetuating the mythical division between edge providers and ordinary end-users, thus seeming to forget that everyone online is exchanging content.

We’re all content providers. Everything that occurs on the Internet is an exchange of data between end-users. That’s the beauty of the Internet. It opens the door widely for all of us to create, socialize, innovate, and possibly become the next Google or Wikipedia. Unfortunately, this distortion by advocates diminishes who we are, who we can be, and the social goods we can create by reductively portraying most of us as passive content consumers. What they don’t get or sufficiently highlight is that the infrastructure of the Internet profoundly impacts what we believe is important, true, and worth pursuing throughout all aspects of our lives.

Network neutrality reflects how society answers three fundamental political questions: Who decides what you do? Who decides who you communicate, transact, and collaborate with? Who decides how you should live your life?

To see what we mean, first consider how indispensable the Internet is for participating in modern life and how deeply it influences the ways we think and act, work and play. Beyond the growth in electronic commerce and innovations that were unimaginable only two decades ago, the Internet has radically increased entrepreneurship, political discourse, the production and consumption of media, social network formation, and community building. Indeed, the Internet provides and shapes essential opportunities for individuals, firms, households, and other organizations to interact with each other. The Internet hasn’t just reconfigured our lived-in environments and transformed capitalism. It’s re-engineered our world.

Next, consider how net neutrality is fundamentally about social control. The social value of the Internet is attributable to its openness, originally enshrined in end-to-end architecture and subsequently protected by the technical difficulties ISPs faced in figuring out who was doing what online in real-time with sufficient accuracy to exercise control over their activities.

Over the past two decades, ISPs invested in developing the technical capability to monitor data streams and develop actionable intelligence that allows them to manage traffic. Keep in mind that “traffic management” is a euphemism for surveillance and control. As with any infrastructure system, some control by ISPs is inevitable and desirable. “Network neutrality” only prevents broadband Internet service providers from using intelligence about end-users and uses (who’s doing what online) to exercise control through the specific means of blocking, throttling, and paid prioritization of data packets. It leaves untouched conventional means for managing congestion, such as congestion pricing, that do not rely on who’s doing what online and instead rely on the timing and quantity of traffic flows.

  • Clarification: Conventional congestion pricing that is practiced in other sectors and is well understood in economics does not require discrimination based on use or user and is not in any way precluded by the network neutrality rules. There are reasons we don’t see it much on the Internet, but the reasons are not legal constraints.
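The distinction the clarification draws, pricing that responds to timing and quantity rather than to who is sending what, can be made concrete with a toy pricing function. This is a minimal sketch; the rates and the load threshold are invented for illustration and do not describe any actual ISP's pricing.

```python
def congestion_price(bytes_sent: int, network_load: float) -> float:
    """Neutral congestion pricing: the charge depends only on how much
    traffic is sent and how congested the network is at the time --
    never on who is sending it or what the packets contain."""
    BASE_RATE = 0.01  # hypothetical price per MB at low load
    # Price ramps up only when utilization exceeds 50% (invented threshold).
    multiplier = 1.0 + 4.0 * max(0.0, network_load - 0.5)
    return (bytes_sent / 1_000_000) * BASE_RATE * multiplier
```

Note that nothing in the function's inputs identifies a user or an application, which is why this kind of congestion management is untouched by the network neutrality rules.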

For the past decade, the FCC has struggled to create legal constraints that stand in for the disappearing technical limitations on ISP control. Every time the FCC succeeded, ISPs fought back and challenged the rules.

Network neutrality opponents fear that network neutrality rules will lead to price regulation. But those fears are unfounded. Most recently, the FCC’s 2015 Open Internet Order reclassified broadband Internet access service as a telecommunications service subject to common carrier regulation under Title II of the Telecommunications Act. Title II lays the foundation for potentially restrictive government regulation, including [cue the scary, dramatic music: dah, dah, dah] price regulation. Fortunately, the FCC exercised substantial forbearance when enacting the Open Internet Order. It only sought to solidify the basic principles that it has strongly supported since 2004 and that have been reflected in its previous policy statement, enforcement actions, merger conditions, and 2010 rules. Price regulation is not and has never been a feature of network neutrality. Still, the phantom of regulatory creep—that the FCC would use its authority to go beyond network neutrality, implement price controls, and take over the Internet—lurks and scares the hell out of conservatives, even though such expansion would never happen in our contemporary political climate.

In reality, network neutrality in general, and the 2015 Open Internet Order in particular, aim to prevent broadband Internet service providers from micromanaging what we do online. As we extend Internet-connected sensors to other infrastructures—transportation and electricity—and into other spaces—cities, workplaces, and homes—society will need to grapple with how to govern intelligence and intelligence-enabled control. Frankly, if we can’t get network neutrality for the Internet, it’s hard to imagine we’ll get it for the other, high-stakes intelligence-enabled control systems we’re building.

* Brett M. Frischmann, Charles Widger Endowed University Professor in Law, Business and Economics, Villanova University, and Evan Selinger, Professor of Philosophy at Rochester Institute of Technology are co-authors of Re-Engineering Humanity, Cambridge University Press: forthcoming in April 2018.

How the Contextual Integrity Framework Helps Explain Children’s Understanding of Privacy and Security Online

This post discusses a new paper that will be presented at the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW). I wrote this paper with co-authors Shalmali Naik, Utkarsha Devkar, Marshini Chetty, Tammy Clegg, and Jessica Vitak.

Watching YouTube during breakfast. Playing Animal Jam after school. Asking Google about snakes. Checking points on Class Dojo. Posting a lip-synching video on Musical.ly. These online activities are interspersed in the daily lives of today’s children. They also involve logging into an account, disclosing information, or exchanging messages with others—actions that can raise privacy and security concerns.

How do elementary school-age children conceptualize privacy and security online? What strategies do they and their parents use to help address such concerns? In interviews with 18 families, we found that children ages 5-11 understand some aspects of how privacy and security apply to online activities. And while children look to their parents for support, parents feel that privacy and security are largely a concern for the future, when their children are older, have their own smartphones, and spend more time on activities like social media. (For a summary of the paper, see this Princeton HCI post.)

Privacy scholar Helen Nissenbaum’s contextual integrity framework was developed to help identify what privacy concerns emerge through the use of new technology and what types of solutions can address those concerns. We found that the framework is also useful to explain what children know (and don’t know) about privacy online and what types of educational materials can enhance that knowledge.

What is contextual integrity? The contextual integrity framework considers privacy from the perspective of how information flows. People expect information to flow in a certain way in a given situation. When it does not, privacy concerns may arise. For example, the norms of a parent-teacher conference dictate that a teacher can reveal information about the parent’s child to the parent, but not about other children. Four parameters influence these norms:

  • Context: This relates to the backdrop against which a given situation occurs.  A parent-teacher conference occurs within an educational context.
  • Attributes: This refers to the types of information involved in a particular context. The parent-teacher conference involves information about a child’s academic performance and behavioral patterns, but not necessarily the child’s medical history.
  • Actors: This concerns the parties involved in a given situation. In a parent-teacher conference, the teacher (sender) discloses information about the student (subject) to the parent (recipient).
  • Transmission Principles: This involves constraints that affect the flow of information. For example, information shared during a parent-teacher conference is unidirectional (i.e. teachers don’t share information about their own children with parents) and confidential (i.e. social norms and legal restrictions prevent teachers from sharing such information with the entire school).
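The four parameters above can be modeled as fields of a simple record, with a privacy concern arising whenever a flow departs from the entrenched norms of its context. The sketch below is illustrative only; the norm set is invented for the parent-teacher example, not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, described by the framework's four parameters
    (the actors parameter is split into sender, subject, and recipient)."""
    context: str     # backdrop, e.g. an educational setting
    attribute: str   # type of information involved
    sender: str
    subject: str
    recipient: str
    principle: str   # constraint on the flow, e.g. confidentiality

# Illustrative norms for a parent-teacher conference (invented for this example).
NORMS = {
    Flow("education", "grades", "teacher", "student", "parent", "confidential"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow raises privacy concerns when it departs from the
    entrenched norms of its context."""
    return flow not in NORMS

# The expected disclosure is fine; rebroadcasting it to the whole school is not:
ok = Flow("education", "grades", "teacher", "student", "parent", "confidential")
bad = Flow("education", "grades", "teacher", "student", "entire school", "broadcast")
```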

How does the contextual integrity framework help us understand what children know about privacy and security online? In our interviews, we found that children largely understood how attributes and actors could affect privacy and security online. They knew that certain types of information, such as a password, deserved more protection than others. They also recognized that it was more appropriate to share information with known parties, such as parents and teachers, rather than strangers or unknown people online.

But children under age 10 struggled to grasp how interacting online could violate transmission principles by, for example, enabling unintended actors to see information. Only one child recognized that someone could take information shared in a chat message and repost it elsewhere, potentially spreading it far beyond its intended audience. Children also struggled to understand how the context of a situation could inform decisions about how to appropriately share information. They largely used the heuristic of “Could I get in trouble for this?” to guide behavior.

How do children and parents navigate privacy and security online? While a few children understood that restricting access to information or providing false information online could help them protect their privacy, most relied on their parents for support in navigating potentially concerning situations. Parents primarily used passive strategies to manage their children’s technology use. They maintained a general awareness of what their children were doing, primarily by telling children to use devices only when parents were around. They minimized the chances that their children would download additional apps or spend money by withholding the passwords for app stores.

Most parents felt their children were too young to face privacy or security risks online. But elementary school-age children already engage in a variety of activities online, and our results show they can absorb lessons related to privacy and security. Children’s willingness to rely on parents suggests that parents have an opportunity to usher their children’s knowledge to the next level. And parents may have an easier time doing so before their children reach adolescence and lose interest in listening to parents.

How can the contextual integrity framework inform children’s learning about privacy and security online? The contextual integrity framework can guide the development of relevant materials that parents and others can use to scaffold their children’s learning. For example, the development of a child-friendly ad blocker could help show children that other actors, such as companies and trackers, can “see” what people do online. Videos or games that explain, in an age-appropriate manner, how the Internet works can help children understand how the Internet can challenge transmission principles such as confidentiality. Integrating privacy- and security-related lessons into apps and websites that children already use can help refine their understanding of how contexts and norms shape decisions to disclose information. For example, the website for the public broadcasting channel PBS Kids instructs children to avoid using personal information, such as their last name or address, in a username.

As the boundaries between offline and online life continue to fade, privacy and security knowledge remains critical for people of all ages. Theoretical frameworks like contextual integrity help us understand how to evaluate and enhance that knowledge.

For more information, read the full paper.

AI and Policy Event in DC, December 8

Princeton’s Center for Information Technology Policy (CITP) recently launched an initiative on Artificial Intelligence, Machine Learning, and Public Policy.  On Friday, December 8, 2017, we’ll be in Washington DC talking about AI and policy.

The event is at the National Press Club, from 12:15 to 2:15pm on Friday, December 8.  Lunch will be provided for those who register in advance.

The agenda includes:

  • Ed Felten, with a background briefing on AI and the AI policy landscape,
  • Arvind Narayanan on AI and fairness,
  • Olga Russakovsky on diversifying the AI workforce,
  • Chloe Bakalar on AI and ethics, and
  • Nick Feamster on AI and freedom of expression.

For those who can stay longer, we’ll have a roundtable discussion with the speakers, starting at 2:30.