Summary of W3C DNT Workshop Submissions

Last week, we hosted the W3C “Web Tracking and User Privacy” Workshop here at CITP (sponsored by Adobe, Yahoo!, Google, Mozilla and Microsoft). For those who were not able to join us for this event, I’ll summarize some of the discussion embodied in the roughly 60 position papers submitted.

The workshop attracted a wide range of participants; the agenda included advocates, academics, government officials, start-ups and established industry players from various sectors. Despite the broad name of the workshop, the discussion centered on “Do Not Track” (DNT) technologies and policy: essentially, ways of ensuring that people have some degree of control over web profiling and tracking.

Unfortunately, I’m going to have to assume that you are familiar with the various proposals before going much further, as the workshop position papers are necessarily short and assume familiarity. (If you are new to this area, the CDT’s Alissa Cooper has a brief blog post from this past March, “Digging in on ‘Do Not Track’”, that mentions many of the discussion points. Technically, much of the discussion involved the mechanisms of the Mayer, Narayanan and Stamm IETF Internet-Draft from March and the Microsoft W3C member submission from February.)

Read on for more…

Technical Implementation: First, some quick background and updates. A number of papers point out that analogizing to a Do-Not-Call-like registry (where, I suppose, netizens would sign up not to be tracked) would not work for online tracking, so we should be careful not to shape the technology and policy too closely after Do-Not-Call. With that recognized, the current technical proposals center around the Microsoft W3C submission and the Mayer et al. IETF submission, including some mix of a DNT HTTP header, a DNT DOM flag, and Tracking Protection Lists (TPLs). While the IETF submission focuses exclusively on the DNT HTTP header, the W3C submission includes all three of these technologies. Browsers are moving pretty quickly here: Mozilla’s Firefox 4.0 includes the DNT header, Microsoft’s IE9 includes all three of these capabilities, Google’s Chrome browser now allows extensions to send the DNT header through the WebRequest API, and Apple has announced that the next version of its Safari browser will support the DNT header.
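
For readers who haven’t seen the drafts, here is a minimal sketch of the two browser-side signals. The header name and value (“DNT: 1”) follow the Mayer/Narayanan/Stamm draft; the DOM property name varied across early implementations, and is shown here as the later-common navigator.doNotTrack:

```typescript
// 1. The DNT HTTP header, as it appears on the wire:
//
//    GET /index.html HTTP/1.1
//    Host: example.com
//    DNT: 1

// 2. The DNT DOM flag, readable by page script in a browser. Early
//    implementations used vendor-specific names; this is a sketch.
if (navigator.doNotTrack === "1") {
  // Respect the preference, e.g. by not loading tracking scripts.
  console.log("user prefers not to be tracked");
}
```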

Some of the papers critique certain aspects of the three implementation options, while others suggest entirely different mechanisms. CITP’s Harlan Yu includes an interesting discussion of the problems with DOM flag granularity and the access control problems that arise when third-party code included in a first-party site runs as if it were first-party code. Toubiana and Nissenbaum talk about a number of problems with the persistence of DNT exceptions (where a user opts back in) when a resource changes content or ownership, and go on to suggest profile-based opting back in based on a “topic” or grouping of websites. Avaya’s submission has a fascinating discussion of the problems with implementing DNT within enterprise environments, where tracking-like mechanisms are used to make sure people are doing their jobs across disparate enterprise web services; Avaya proposes a clever solution in which the browser first checks whether it can reach a resource available only inside the enterprise (virtual) network, in which case it ignores DNT preferences for enterprise software tracking mechanisms (sketched below). A slew of submissions from Aquin et al., Azigo and PDECC favor a culture of “self-tracking”, allowing and teaching people to know more about the digital traces they leave and giving them (or their agents) control over the use and release of their personal information. CASRO-ESOMAR and Apple have interesting discussions of gaming TPLs: CASRO-ESOMAR points out that a competitor could require a user to accept a TPL that blocks traffic from its competitors, and Apple talks about spam-like DNS cycling as an example of an “arms race” response against TPLs.
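
Avaya’s reachability check is easy to sketch. Here is a minimal, hypothetical version; the probe URL and fetch-based probe are invented for illustration, and a real deployment would use an enterprise-configured endpoint:

```typescript
// Sketch of Avaya's proposal, with invented names: before honoring DNT for
// enterprise tracking mechanisms, check whether a resource reachable only
// inside the enterprise (virtual) network responds.
async function insideEnterpriseNetwork(): Promise<boolean> {
  try {
    // Hypothetical probe endpoint, reachable only on the corporate network.
    const res = await fetch("https://intranet-probe.example.corp/ping");
    return res.ok;
  } catch {
    return false; // unreachable: we are outside the enterprise network
  }
}

async function honorDntForEnterpriseTools(): Promise<boolean> {
  // Inside the network, ignore DNT for enterprise tracking; outside, honor it.
  return !(await insideEnterpriseNetwork());
}
```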

Definitions: Many of the papers addressed definitions, definitions, definitions… mostly what “tracking” means and what terms like “third-party” should mean. Many industry submissions, such as those from Paypal, Adobe, SIIA, and Google, urge caution so that beneficial types of “tracking”, such as analytics and forensics, are not inadvertently swept up, and further argue that clear definitions of the terms involved in DNT are crucial to avoid disrupting user expectations, innovation and the online ecosystem. Paypal points out, as have others, that domain names are not good indicators of third-party status (e.g., metrics.apple.com is the Adobe Omniture service for apple.com, and fb.com is equivalent to facebook.com; illustrated below). Ashkan Soltani’s submission distinguishes “do not use” from “do not collect” conceptions of DNT and argues for a “do not identify” solution, requiring the removal of any unique identifiers associated with the data. Soltani points out that this has interesting measurement and enforcement properties: if a user sees a unique ID under a do-not-identify regime, the site is doing it wrong.
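
To make Paypal’s point concrete, here is a small illustration (the hosts are real, the test is invented): a naive “same registrable domain” check misclassifies both examples from the paragraph above.

```typescript
// Illustration only: a naive same-party test based on the last two DNS labels.
function naiveSameParty(pageHost: string, requestHost: string): boolean {
  const base = (host: string) => host.split(".").slice(-2).join(".");
  return base(pageHost) === base(requestHost);
}

// Looks first-party, but metrics.apple.com is run by Adobe Omniture:
console.log(naiveSameParty("www.apple.com", "metrics.apple.com")); // true

// Looks third-party, but fb.com and facebook.com are the same party:
console.log(naiveSameParty("www.facebook.com", "fb.com")); // false
```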

Enforcement: Some raised the issue of enforcement; Mozilla, for example, wants to make sure that there are reasonable enforcement mechanisms to deal with entities that ignore DNT mechanisms. On the other side, so to speak, are those calling for self-regulation, such as Comcast and SIIA, versus those advocating explicit regulation. The opinion polling research groups, CASRO-ESOMAR, call explicitly for regulation no matter what DNT mechanism is ultimately adopted, such that DNT header requests are clearly enforced or TPLs are regulated tightly so as not to over-block legitimate research activities. Abine wants a cooperative market mechanism that results in a “healthy market system that is responsive to consumer outcome metrics” and that incentivizes advertising companies to work with privacy solution providers to increase consumer awareness and transparency around online tracking. Many of the industry players worried about definitions are also worried about over-prescription from a regulatory perspective; e.g., Datran Media is concerned that over-prescription via regulation might stifle innovation in new business models. Hoofnagle et al. are evaluating the effectiveness of self-regulation, and find that the self-regulation programs currently in existence are greatly tilted in favor of industry and do not adequately embody consumer conceptions of privacy and tracking.

Research: There were a number of submissions addressing research that is ongoing and/or needed to gauge various aspects of the DNT puzzle. The submissions from McDonald and Wang et al. describe user studies focusing, respectively, on what consumers expect from DNT (spoiler: they expect no collection of their data) and on the usability and effectiveness of current opt-out tools. Both of these lines of work argue for usable mechanisms that communicate how developers implement and envision DNT and how users can best express their preferences via these tools. NIST’s submission argues for empirical studies to set objective and usable standards for tracking protection and describes a current study of single sign-on (SSO) implementations. Thaw et al. discuss a proposal for incentivizing developers to communicate and design around the various levels of rich data they need to perform certain kinds of ad targeting, and then use a multi-armed bandit model to illustrate game-theoretic ad targeting that can be tweaked based on how much data a network is allowed to collect (a flavor of this framing is sketched below). Finally, CASRO-ESOMAR makes a plea for exempting legitimate research purposes from DNT, so that opinion polling and academic research can avoid bias.
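
Since the summary above doesn’t specify Thaw et al.’s actual model, here is a generic epsilon-greedy multi-armed bandit sketch just to give a flavor of the framing; imagine each arm as an ad-targeting strategy using a different level of user data, with click-through or revenue as the reward:

```typescript
// A generic epsilon-greedy bandit (illustrative; not Thaw et al.'s model).
class EpsilonGreedyBandit {
  private counts: number[]; // pulls per arm
  private values: number[]; // running mean reward per arm

  constructor(nArms: number, private epsilon = 0.1) {
    this.counts = new Array(nArms).fill(0);
    this.values = new Array(nArms).fill(0);
  }

  // Explore a random arm with probability epsilon; otherwise exploit the best.
  selectArm(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.counts.length);
    }
    return this.values.indexOf(Math.max(...this.values));
  }

  // Incremental mean update: v_n = v_{n-1} + (r - v_{n-1}) / n
  update(arm: number, reward: number): void {
    this.counts[arm] += 1;
    this.values[arm] += (reward - this.values[arm]) / this.counts[arm];
  }
}

// Usage: three targeting strategies, from "no user data" to "rich profile".
const bandit = new EpsilonGreedyBandit(3);
const arm = bandit.selectArm();
bandit.update(arm, 0.02); // observed reward, e.g. click-through rate
```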

Transparency: A particularly fascinating thread of commentary, to me, was the extent to which submissions touched on or entirely focused on issues of transparency in tracking. Grossklags argues that DNT efforts will spark increased transparency, but is not sure that will overcome some common consumer privacy barriers observed in research. Seltzer talks about the intimate relationship between transparency and privacy and concludes that a DNT header is not very transparent in its operation (even if its use is), while TPLs are more transparent in that they are a user-side mechanism that users can inspect, change and verify for correct operation. Google argues that there is a need for transparency in “what data is collected and how it is used”, leaving out the ability for users to affect or control these things. In contrast, BlueKai also advocates for transparency, in the sense of letting users both access their profiles and “control” the data it collects, but it doesn’t, and probably cannot, extend this transparency to an understanding of how BlueKai’s clients use the data. Datran Media describes its PreferenceCentral tool, which allows opting out of brands the user doesn’t want targeting them (instead of ad networks, with which people are not familiar); Datran argues this is granular enough to avoid the “creepy” targeting feeling that users get from behavioral ads while still allowing high-value targeted advertising. Evidon analogizes to physical-world shopping transactions and concludes, smartly: “Anytime data that was not explicitly provided is explicitly used, there is a reflexive notion of privacy violation.” and “A permanently affixed ‘Not Me’ sign is not a representation of an engaged, meaningful choice.”

W3C vs. IETF: Finally, Mozilla seems to be the only submission that wrestles with the “which standards body?” question: W3C, IETF or some mix of both? They point out that the DNT header is a broader issue than just web browsing, so it would properly be tackled by the IETF, where HTTP resides, while the W3C effort could focus on TPLs, with a subcommittee for the DNT DOM element.

And here are a bunch of submissions that caught my eye but don’t fit into the above categories:

  • Soghoian argues that the quantity and quality of information claimed to be needed for security, law enforcement and fraud prevention is usually so great as to risk making it the exception that swallows the rule. Soghoian further recommends putting a total kibosh on certain nefarious technologies such as browser fingerprinting.

  • Lowenthal makes the very good point that browser vendors need to get more serious about managing security and privacy vulnerabilities, since that kind of risk is best dealt with at the choke point of the browser a user chooses, rather than across the myriad of possible web entities. This would also allow browsers to compete on how privacy-preserving they can be.

  • Mayer argues for a “generative” approach to privacy-choice signaling technology, highlighting that language preferences (via short codes) and browsing platform (via user-agent strings) are already sent as preferences in web requests, and web sites are free to respond as they see fit. A DNT signaling mechanism like this would allow great flexibility in how a web service responds to a DNT request: for example, serving a DNT version of the site or resource, prompting the user for their preferences, or asking for payment before serving.
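
As a concrete reading of Mayer’s analogy, here is a minimal sketch of a server choosing its own response to the DNT header, much as it might vary a response on Accept-Language; the handler and page contents are invented:

```typescript
// Sketch: a server free to respond to DNT as it sees fit (invented responses).
import { createServer } from "node:http";

createServer((req, res) => {
  // Node lowercases incoming header names; "dnt" carries the user preference.
  if (req.headers["dnt"] === "1") {
    // Options open to the site: serve an untracked version, prompt for
    // preferences, or ask for payment. Here we just serve an untracked page.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<p>Untracked version of this page.</p>");
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<p>Regular page, with analytics and targeted ads.</p>");
  }
}).listen(8080);
```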

  • Yahoo points out that DNT will take a while to make it into the majority of browsers in use. They suggest a hybrid approach: the DAA CLEAR ad notice for backwards compatibility with browsers that don’t support DNT mechanisms, and the DNT header for an opt-out that is persistent and enforceable.

Whew; I likely left out a lot of good stuff across the remaining submissions, but I hope that readers get an idea of some of the issues in play and can consult the submissions they find particularly interesting as this develops. We hope to have someone pen a “part 2” to this entry describing the discussion during the workshop and what the next steps in DNT will be.

Comments

  1. Anonymous says

    The jurisdictional issue raised by Ronald, as well as the specter raised by Mayer of sites punishing users for opting out of tracking (Mayer suggests by demanding payment for access to content, but simply denying access entirely seems even more likely), indicate to me that all of the proposals that rely on the cooperation of web sites that want to track users are non-starters, including the DNT header itself.

    A truly consumer-friendly mechanism for opting out of tracking must therefore be something that the end user can do locally, without requiring the cooperation of the web site they’re contacting, and whose use they can conceal from that web site to evade attempts to punish those who refuse to submit to tracking.

    I suggest a three-pronged approach:

    1. Chameleon browser: defeat “browser fingerprinting” by making a browser/addon that effectively randomizes the fingerprint. Multiple requests to the same IP in a short time should have a consistent fingerprint, though, to prevent easy detection that a user’s browser is a chameleon; sites that like to track would be motivated to detect chameleons and punish them by, e.g., withholding content.

    2. Default non-sending of third-party cookies, narrowly defined as “not the same domain as the one in the address bar”. That is, if I request a page from foo.com, the browser should send foo.com cookies that have been set with all of the foo.com requests and let foo.com set new cookies. But if the page also requests an embedded image from ad.doubleclick.com, the browser should send no cookies and permit no new cookies on that request, because I am not visiting ad.doubleclick.com, I am visiting foo.com. (A sketch of this rule appears at the end of this comment.) This will break some things, like Facebook integration, that some users may want, so there should be a way for users to create exceptions for sites like Facebook that they use; then, when objects at that site are requested while visiting a page of another site, normal cookie setting and sending will occur on those requests.

    Punishment evasion here may not be needed. But if ad networks start noticing that close-together requests from a single IP show a user isn’t letting them set cookies, and send some “deny access” signal to the site embedding the ad, then there is difficulty. One possibility is never to request the ad in the first place; that might actually be much harder to detect, but it requires extending the third-party-cookie policy to third-party embeds in general. The missing images and iframes can show a URL on mouseover and a “load this thing” button for the user to click. This has the added benefit of blocking most advertising (and annoying forum sigs/avatars too), at the nuisance cost of sometimes having to click to see a desired embedded image.

    3. A shared blocklist of known tracking URLs can be made available to a blocking tool in browsers. Crowdsourced (and vetted) knowledge of tracking entities would allow browsers to skip fetching resources from URLs that are known to track, or that merely slow page loads while adding no value for the end-user. Ad networks whose ads are intrusive and irrelevant, Google Analytics, and the like are obvious candidates for such a list; Google AdWords tends to be more on the side of being potentially useful to the user at times, so perhaps it should be let through. Of course, users should be able to make local exceptions, both allows and blocks. Users could opt in to having what they block published (anonymously) to a clearinghouse; if enough users block the same thing, it may be automatically added to the general block list.

    If sites go to serious war with viewers whose browsers employ such countermeasures, using tricks to detect repeat visits that don’t accumulate tracking info, or failures to load resources from third parties (e.g., they serve a page, then ask Google whether it got a request from a certain IP for analytics.js with that referrer, and if not blacklist that IP for a while), then the war would have to be escalated with:

    4. A widespread Tor-like or VPN-like service that allows most browsing requests to come from an effectively randomized IP address, with a single page load and all its embedded resources sharing one IP address, the next page load having a completely different one, and so on.

    There are two countermeasures against that. One only works if few enough people use it: identify and block the exit IPs of these IP-randomizing services. The other is to heavily limit deep linking via referrer checks, and to deny access to “inner” pages not only if the expected referrer URL is absent, but also if it’s present but the requesting IP didn’t recently load that page.

    At that point, the end users checkmate the evil corporate giants by building a whole new web built on distributed hashtables instead of specific servers for specific websites. 🙂 (An added bonus is that if the DHT is built on P2P mechanisms that don’t rely on DNS, it short-circuits ICE evilness/COICA and gets rid of the final single point of failure in the internet’s architecture, the root name server.)
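
    To make the cookie rule in point 2 concrete, here is a minimal sketch (the hosts and the exception list are just examples):

    ```typescript
    // Sketch of the narrow third-party cookie rule: cookies flow only when the
    // request host matches the address-bar host, unless the user opted back in.
    const userExceptions = new Set<string>(["facebook.com"]);

    function allowCookies(addressBarHost: string, requestHost: string): boolean {
      if (requestHost === addressBarHost) return true; // first-party, narrowly
      return userExceptions.has(requestHost);          // user-created exception
    }

    console.log(allowCookies("foo.com", "foo.com"));            // true
    console.log(allowCookies("foo.com", "ad.doubleclick.com")); // false
    console.log(allowCookies("foo.com", "facebook.com"));       // true
    ```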

  2. Ronald says

    I guess we are mainly talking about US legislation…

    What about sites that are physically hosted outside of the US?
    What about sites that are maintained by corporations not registered in the US?