

Web Tracking and User Privacy Workshop: Test Cases for Privacy on the Web

This guest post is from Nick Doty, of the W3C and UC Berkeley School of Information. As a companion post to my summary of the position papers submitted for last month’s W3C Do-Not-Track Workshop, hosted by CITP, Nick goes deeper into the substance and interaction during the workshop.

The level of interest and participation in last month’s Workshop on Web Tracking and User Privacy — about a hundred attendees spanning multiple countries, dozens of companies, a wide variety of backgrounds — confirms the broad interest in Do Not Track. The relatively straightforward technical approach with a catchy name has led to, in the US, proposed legislation at both the state and federal level and specific mention by the Federal Trade Commission (it was nice to have Ed Felten back from DC representing his new employer at the workshop), and comparatively rapid deployment of competing proposals by browser vendors. Still, one might be surprised that so many players are devoting such engineering resources to a relatively narrow goal: building technical means that allow users to avoid tracking across the Web for the purpose of compiling behavioral profiles for targeted advertising.

In fact, Do Not Track (in all its variations and competing proposals) is the latest test case for how new online technologies will address privacy issues. What mix of minimization techniques (where one might classify Microsoft’s Tracking Protection block lists) versus preference expression and use limitation (like a Do Not Track header) will best protect privacy and allow for innovation? Can parties agree on a machine-readable expression of privacy preferences (as has been heavily debated in P3P, GeoPriv and other standards work), and if so, how will terms be defined and compliance monitored and enforced? Many attendees were at the workshop not just to address this particular privacy problem — ubiquitous invisible tracking of Web requests to build behavioral profiles — but to grab a seat at the table where the future of how privacy is handled on the Web may be decided. The W3C, for its part, expects to start an Interest Group to monitor privacy on the Web and spin out specific work as new privacy issues inevitably arise, in addition to considering a Working Group to address this particular topic (more below). The Internet Engineering Task Force (IETF) is exploring a Privacy Directorate to provide guidance on privacy considerations across specs.
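To make the "preference expression" approach concrete, here is a minimal sketch (in Python, standard library only) of the two halves of a Do Not Track exchange: the browser adds a DNT: 1 request header, and a cooperating ad server checks for it before setting a tracking identifier. The cookie format and port are invented for illustration, and what a compliant server must actually do when it sees the header is precisely the open question under debate.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AdHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser-side half of the exchange is a single request header: DNT: 1
        opted_out = self.headers.get("DNT") == "1"
        self.send_response(200)
        if not opted_out:
            # Only assign a tracking identifier to users who have not opted out.
            self.send_header("Set-Cookie", "uid=12345; Max-Age=31536000")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"tracking skipped\n" if opted_out else b"tracking cookie set\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AdHandler).serve_forever()
```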

At a higher level, this debate presents a test case for the process of building consensus and developing standards around technologies like tracking protection or Do Not Track that have inspired controversy. What body (or rather, combination of bodies) can legitimately define preference expressions that must operate at multiple levels in the Web stack, not to mention serve the diverse needs of individuals and entities across the globe? Can the same organization that defines the technical design also negotiate semantic agreement between very diverse groups on the meaning of “tracking”? Is this an appropriate role for technical standards bodies to assume? To what extent can technical groups work with policymakers to build solutions that can be enforced by self-regulatory or governmental players?

Discussion at the recent workshop confirmed many of these complexities: though the agenda was organized to roughly separate user experience, technical granularity, enforcement and standardization, overlap was common and inevitable. Proposals for an “ack” or response header brought up questions of whether the opportunity to disclaim following the preference would prevent legal enforcement; whether not having such a response would leave users confused about when they had opted back in; and how granular such header responses should be. In defining first vs. third party tracking, user expectations, current Web business models and even the same-origin security policy could point the group in different directions.

We did see some moments of consensus. There was general agreement that while user interface issues were key to privacy, trying to standardize those elements was probably counterproductive, though providing guidance could help significantly. Regarding the scope of “tracking”, the group was roughly evenly divided on what they would most prefer: a broad definition (any logging), a narrow definition (online behavioral advertising profiling only), or something in between (where tracking is more than OBA but excludes things like analytics or fraud protection, as in the proposal from the Center for Democracy and Technology). But in a “hum” to see which proposals workshop attendees opposed (“non-starters”), no one objected to starting with a CDT-style middle ground — a rather shocking level of agreement to end two days chock full of debate.

For tech policy nerds, then, this intimate workshop about a couple of narrow technical proposals was heady stuff. And the points of agreement suggest that real interoperable progress on tracking protection — the kind that will help the average end user’s privacy — is on the way. For the W3C, this will certainly be a topic of discussion at the ongoing meeting in Bilbao, and we’re beginning detailed conversations about the scope and milestones for a Working Group to undertake technical standards work.

Thanks again to Princeton/CITP for hosting the event, and to Thomas and Lorrie for organizing it: bringing together this diverse group of people on short notice was a real challenge, and it paid off for all of us. If you’d like to see more primary materials: minutes from the workshop (including presentations and discussions) are available, as are the position papers and slides. And the W3C will post a workshop report with a more detailed summary very soon.


Building a better CA infrastructure

As several Tor project authors, Ben Adida and many others have written, our certificate authority infrastructure has the flaw that any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now that’s just untenable. So what solutions are available?

First, some non-solutions: Extended validation certs do nothing useful. Will users be properly trained to look for the extra cues in browser behavior, and to balk when those cues are absent because a site presents an ordinary cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
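For readers who want to see the idea in code, here is a minimal sketch of that kind of certificate memory, in Python, using only the standard library. The cache file name and the choice of a SHA-256 fingerprint are my own for illustration; Certificate Patrol's actual implementation differs.

```python
import hashlib
import json
import ssl
from pathlib import Path

CACHE = Path("seen_certs.json")  # hypothetical local store of previously seen fingerprints

def cert_fingerprint(host, port=443):
    # Fetch the certificate the server presents and hash its DER encoding.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host):
    seen = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    fingerprint = cert_fingerprint(host)
    if host in seen and seen[host] != fingerprint:
        # Something anomalous: the certificate changed since we last looked.
        print(f"WARNING: certificate for {host} changed")
        print(f"  previously: {seen[host]}")
        print(f"  now:        {fingerprint}")
    else:
        seen[host] = fingerprint
        CACHE.write_text(json.dumps(seen, indent=2))

if __name__ == "__main__":
    check("www.example.com")
```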

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.

UPDATE: Google is now doing almost exactly what I suggested.
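As a rough sketch of the client side of the notary idea, the snippet below compares the certificate we see first-hand against what several notaries report. The notary endpoints and the JSON reply format are purely hypothetical; the real Perspectives/network-notary protocol and record structure are different.

```python
import hashlib
import json
import ssl
import urllib.request

# Purely hypothetical notary endpoints; replace with real services if you have them.
NOTARIES = [
    "https://notary-a.example/lookup",
    "https://notary-b.example/lookup",
]

def observed_fingerprint(host, port=443):
    # What we see from our own vantage point.
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def notary_fingerprint(notary_url, host):
    # Assumed reply: a JSON object like {"host": ..., "sha256": ...}
    with urllib.request.urlopen(f"{notary_url}?host={host}") as response:
        return json.load(response)["sha256"]

def cross_check(host):
    mine = observed_fingerprint(host)
    agreeing = sum(1 for n in NOTARIES if notary_fingerprint(n, host) == mine)
    print(f"{agreeing}/{len(NOTARIES)} notaries saw the same certificate we did")
```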

There are a variety of other proposals out there, notably trying to leverage DNSSEC to enhance or supplant the need for SSL/TLS certificates. Since DNSSEC provides more control over your DNS records, it also provides more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attacker firewalls to filter without breaking everything, so I certainly hope this takes off.
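The concrete form this idea later took is publishing certificate associations in DNS, the DANE/TLSA approach. As a sketch, here is the lookup half of it using the third-party dnspython package (version 2.x assumed); verifying the returned association against the certificate the server actually presents, and validating the DNSSEC signatures on the answer, are the parts that would actually reduce reliance on CAs.

```python
# Requires the third-party dnspython package (pip install dnspython), version 2.x.
import dns.resolver

def tlsa_records(host, port=443):
    # DANE publishes certificate associations at _<port>._tcp.<host>
    qname = f"_{port}._tcp.{host}"
    try:
        answers = dns.resolver.resolve(qname, "TLSA")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [record.to_text() for record in answers]

if __name__ == "__main__":
    for record in tlsa_records("www.example.com"):
        print(record)
```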

Let’s say that future browsers properly use all of these tricks and can unquestionably determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists that it proxy all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor's note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 -- as well as the "Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates" event that CITP hosted in DC last year with policymakers, industry, and activists.]


A Free Internet, If We Can Keep It

“We stand for a single internet where all of humanity has equal access to knowledge and ideas. And we recognize that the world’s information infrastructure will become what we and others make of it.”

These two sentences, from Secretary of State Clinton’s groundbreaking speech on Internet freedom, sum up beautifully the challenge facing our Internet policy. An open Internet can advance our values and support our interests; but we will only get there if we make some difficult choices now.

One of these choices relates to anonymity. Will it be easy to speak anonymously on the Internet, or not? This was the subject of the first question in the post-speech Q&A:

QUESTION: You talked about anonymity on line and how we have to prevent that. But you also talk about censorship by governments. And I’m struck by – having a veil of anonymity in certain situations is actually quite beneficial. So are you looking to strike a balance between that and this emphasis on censorship?

SECRETARY CLINTON: Absolutely. I mean, this is one of the challenges we face. On the one hand, anonymity protects the exploitation of children. And on the other hand, anonymity protects the free expression of opposition to repressive governments. Anonymity allows the theft of intellectual property, but anonymity also permits people to come together in settings that gives them some basis for free expression without identifying themselves.

None of this will be easy. I think that’s a fair statement. I think, as I said, we all have varying needs and rights and responsibilities. But I think these overriding principles should be our guiding light. We should err on the side of openness and do everything possible to create that, recognizing, as with any rule or any statement of principle, there are going to be exceptions.

So how we go after this, I think, is now what we’re requesting many of you who are experts in this area to lend your help to us in doing. We need the guidance of technology experts. In my experience, most of them are younger than 40, but not all are younger than 40. And we need the companies that do this, and we need the dissident voices who have actually lived on the front lines so that we can try to work through the best way to make that balance you referred to.

Secretary Clinton’s answer is trying to balance competing interests, which is what good politicians do. If we want A, and we want B, and A is in tension with B, can we have some A and some B together? Is there some way to give up a little A in exchange for a lot of B? That’s a useful way to start the discussion.

But sometimes you have to choose — sometimes A and B are profoundly incompatible. That seems to be the case here. Consider the position of a repressive government that wants to spy on a citizen’s political speech, as compared to the position of the U.S. government when it wants to eavesdrop on a suspect’s conversations under a valid search warrant. The two positions are very different morally, but they are pretty much the same technologically. Which means that either both governments can eavesdrop, or neither can. We have to choose.

Secretary Clinton saw this tension, and, being a lawyer, she saw that law could not resolve it. So she expressed the hope that technology, the aspect she understood least, would offer a solution. This is a common pattern: Given a difficult technology policy problem, lawyers will tend to seek technology solutions and technologists will tend to seek legal solutions. (Paul Ohm calls this “Felten’s Third Law”.) It’s easy to reject non-solutions in your own area because you have the knowledge to recognize why they will fail; surely, you assume, a solution must be lurking somewhere in the unexplored wilderness of the other area.

If we’re forced to choose — and we will be — what kind of Internet will we have? In Secretary Clinton’s words, “the world’s information infrastructure will become what we and others make of it.” We’ll have a free Internet, if we can keep it.


There’s anonymity on the Internet. Get over it.

In a recent interview, prominent antivirus developer Eugene Kaspersky decried the role of anonymity in cybercrime. This is not a new claim – it is touched on in the Commission on Cybersecurity for the 44th Presidency Report and the Cybersecurity Act of 2009, among others – but it misses the mark. Any Internet design would allow anonymity. What renders our Internet vulnerable is primarily weaknesses in software security and authentication, not anonymity.

Consider a hypothetical of three Internet users: Alice, Bob, and Charlie. If Alice wants to communicate anonymously with Charlie, she may relay her messages through Bob. While Charlie knows Bob is an intermediary, Charlie does not know with whom he is ultimately communicating. For even greater anonymity Alice can pass her messages through multiple Bobs, and by applying cryptography she can ensure no individual Bob can piece together that she is communicating with Charlie. This basic approach to anonymity is remarkable in its independence of the Internet’s design: it only requires that some Bob(s) can and do run intermediary software. Even on an Internet where users could verify each other’s identity this means of anonymity would remain viable.
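Here is a toy version of that layered relaying, using the third-party cryptography package's Fernet recipe for the symmetric layers. Everything runs in one process and the layers carry no addressing information, so it only illustrates the wrapping and peeling; real systems such as Tor add per-hop addressing, key negotiation, and padding.

```python
# Requires the third-party cryptography package (pip install cryptography).
from cryptography.fernet import Fernet

# Each Bob holds his own symmetric key; in practice Alice would negotiate
# these with the relays rather than generate them all herself.
bob_keys = [Fernet.generate_key() for _ in range(3)]
bobs = [Fernet(key) for key in bob_keys]

# Alice wraps the message once per relay, innermost layer for the last hop.
message = b"hello Charlie, love Alice"
onion = message
for bob in reversed(bobs):
    onion = bob.encrypt(onion)

# Each Bob peels exactly one layer. No single relay sees both who sent the
# message and what it says; only the last hop recovers the plaintext, and it
# has no idea the message originated with Alice.
for bob in bobs:
    onion = bob.decrypt(onion)

assert onion == message
```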

The sad state of software security – the latest DHS weekly bulletin alone identified over 40 “high severity” vulnerabilities – is what enables malicious users to exploit the Internet’s indelible capacity for anonymity. Modifying the prior hypothetical, suppose Alice now wants to spam, phish, denial of service (DoS) attack, or hack Charlie. After compromising Bob’s computer with malicious software (malware), Alice can send emails, host websites, and launch DoS attacks from it; Charlie knows Bob is apparently misbehaving, but has no means of discovering Alice’s role. Nearly all spam, phishing, and DoS attacks are now perpetrated with networks of compromised computers like Bob’s (botnets). At the writing of a July 2009 private sector report, just five botnets sourced nearly 75% of spam. Worse yet, botnets are increasingly self-perpetuating: spam and phishing websites propagate malware that compromises new computers for the botnet.

Shortcomings in authentication, the means of proving one’s identity either when necessary or at all times, are a secondary contributor to the Internet’s ills. Most applications rely on passwords, which are easily guessed or divulged through deception – the very mechanisms of most phishing and account hijacking. There are potential technical solutions that would enable a user to authenticate themselves without the risk of compromising accounts. But any approach will be undermined by weaknesses in underlying software security when a malicious party can trivially compromise a user’s computer.
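One family of such solutions is challenge-response authentication, where the secret never crosses the wire and a captured response cannot be replayed. The sketch below is an illustration of the idea with invented key handling, not any particular deployed protocol, and, as the paragraph notes, it is moot if malware already controls the user's machine.

```python
import hashlib
import hmac
import os

# Illustrative only: the shared secret is provisioned out of band.
secret = b"per-user shared secret"

def server_issue_challenge():
    return os.urandom(16)          # fresh random challenge per login attempt

def client_respond(challenge, key=secret):
    # The client proves knowledge of the secret without ever sending it.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge, response, key=secret):
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
assert server_verify(challenge, client_respond(challenge))
```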

The policy community is already trending towards acceptance of Internet anonymity and refocusing on software security and authentication; the recent White House Cyberspace Policy Review in particular emphasizes both issues. To the remaining unpersuaded, I can only offer at last a truism: There’s anonymity on the Internet. Get over it.


Net Neutrality: When is Network Management "Reasonable"?

Last week the FCC released its much-awaited Notice of Proposed Rulemaking (NPRM) on network neutrality. As expected, the NPRM affirms past FCC neutrality principles, and adds two more. Here’s the key language:

1. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from sending or receiving the lawful content of the user’s choice over the Internet.

2. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from running the lawful applications or using the lawful services of the user’s choice.

3. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from connecting to and using on its network the user’s choice of lawful devices that do not harm the network.

4. Subject to reasonable network management, a provider of broadband Internet access service may not deprive any of its users of the user’s entitlement to competition among network providers, application providers, service providers, and content providers.

5. Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

6. Subject to reasonable network management, a provider of broadband Internet access service must disclose such information concerning network management and other practices as is reasonably required for users and content, application, and service providers to enjoy the protections specified in this part.

That’s a lot of policy packed into (relatively) few words. I expect that my colleagues and I will have a lot to say about these seemingly simple rules over the coming weeks.

Today I want to focus on the all-purpose exception for “reasonable network management”. Unpacking this term might tell us a lot about how the proposed rule would operate.

Here’s what the NPRM says:

Reasonable network management consists of: (a) reasonable practices employed by a provider of broadband Internet access to (i) reduce or mitigate the effects of congestion on its network or to address quality-of-service concerns; (ii) address traffic that is unwanted by users or harmful; (iii) prevent the transfer of unlawful content; or (iv) prevent the unlawful transfer of content; and (b) other reasonable network management practices.

The key word is “reasonable”, and in that respect the definition is nearly circular: in order to be “reasonable”, a network management practice must be (a) “reasonable” and directed toward certain specific ends, or (b) “reasonable”.

In the FCC’s defense, it does seek comments and suggestions on what the definition should be, and it does say that it intends to make case-by-case determinations in practice, as it did in the Comcast matter. Further, it rejects a “strict scrutiny” standard of the sort that David Robinson rightly criticized in a previous post.

“Reasonable” is hard to define because in real life every “network management” measure will have tradeoffs. For example, a measure intended to block copyright-infringing material would in practice make errors in both directions: it would block X% (less than 100%) of infringing material, while as a side-effect also blocking Y% (more than 0%) of non-infringing material. For what values of X and Y is such a measure “reasonable”? We don’t know.
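To see why the question is hard, plug in some purely illustrative numbers: a filter that looks accurate in percentage terms can still block more lawful material than infringing material once base rates are taken into account.

```python
# Purely illustrative numbers for the X/Y tradeoff above: a filter that blocks
# 95% of infringing files (X) and misfires on 2% of lawful files (Y), applied
# to a corpus where only 1% of the files actually infringe.
corpus = 1_000_000
infringing = int(corpus * 0.01)        # 10,000
lawful = corpus - infringing           # 990,000

blocked_infringing = int(infringing * 0.95)   # 9,500
blocked_lawful = int(lawful * 0.02)           # 19,800

print("blocked infringing files:", blocked_infringing)
print("blocked lawful files:    ", blocked_lawful)   # more than twice as many
```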

Of course, declaring a vague standard rather than a bright-line rule can sometimes be good policy, especially where the facts on the ground are changing rapidly and it’s hard to predict what kind of details might turn out to be important in a dispute. Still, by choosing a case-by-case approach, the FCC is leaving us mostly in the dark about where it will draw the line between “reasonable” and “unreasonable”.


Android Open Source Model Has a Short Circuit

[Update: Google subsequently worked out a mechanism that allows Cyanogen and others to distribute their mods separate from the Google Apps.]

Last year, Google entered the mobile phone market with a Linux-based mobile operating system. The company brought together device manufacturers and carriers in the Open Handset Alliance, explaining that, “Together we have developed Android™, the first complete, open, and free mobile platform.” There has been considerable engagement from the open source developer community, as well as significant uptake from consumers. Android may have even been instrumental in motivating competing open platforms like LiMo. In addition to the underlying open source operating system, Google chose to package essential (but proprietary) applications with Android-based handsets. These applications include most of the things that make the handsets useful (including basic functions to sync with the data network). This two-tier system of rights has created a minor controversy.

A group of smart open source developers created a modified version of the Android+Apps package, called Cyanogen. It incorporated many useful and performance-enhancing updates to the Android OS, and included unchanged versions of the proprietary Apps. If Cyanogen hadn’t included the Apps, the package would have been essentially useless, given that Google doesn’t appear to provide a means to install the Apps on a device that has only a basic OS. As Cyanogen gained popularity, Google decided that it could no longer watch the project distribute their copyright-protected works. The lawyers at Google decided that they needed to send a Cease & Desist letter to the Cyanogen developer, which caused him to take the files off of his site and spurred backlash from the developer community.

Android represents a careful balance on the part of Google, in which the company seeks to foster open platforms but maintain control over its proprietary (but free) services. Google has stated as much, in response to the current debate. Android is an exciting alternative to the largely closed-source model that has dominated the mobile market to date. Google closely integrated their Apps with the operating system in a way that makes for a tremendously useful platform, but in doing so hampered the ability of third-party developers to fully contribute to the system. Perhaps the problem is simply that they did not choose the right location to draw the line between open vs. closed source — or free-to-distribute vs. not.

The latter distinction might offer a way out of the conundrum. Google could certainly grant blanket rights to third-parties to redistribute unchanged versions of their Apps. This might compromise their ability to make certain business arrangements with carriers or handset providers in which they package the software for a fee. That may or may not be worth it from their business perspective, but they could have trouble making the claim that Android is a “complete, open, and free mobile platform” if they don’t find a way to make it work for developers.

This all takes place in the context of a larger debate over the extent to which mobile platforms should be open — voluntarily or via regulatory mandate. Google and Apple have been arguing via letters to the FCC about whether or not Apple should allow the Google Voice application in the iPhone App Store. However, it is yet to be determined whether the Commission has the jurisdiction and political will to do anything about the issue. There is a fascinating sideshow in that particular dispute, in which AT&T has made the very novel claim that Google Voice violates network neutrality (well, either that or common carriage — they’ll take whichever argument they can win). Google has replied. This is a topic for another day, but suffice to say the clear regulatory distinctions between telephone networks, broadband, and devices have become muddied.

(Cross-posted to Managing Miracles)


The Markey Net Neutrality Bill: Least Restrictive Network Management?

It’s an exciting time in the net neutrality debate. FCC Chairman Julius Genachowski’s speech on Monday promised a new FCC proceeding that will aim to create a formal rule to replace the Commission’s existing policy statement.

Meanwhile, net neutrality advocates in Congress are pondering new legislation for two reasons: First, there is a debate about whether the FCC currently has enough authority to enforce a net neutrality rule. Second, regardless of whether the Commission has such authority today or doesn’t, some would rather see net neutrality rules etched into statute than leave them to the uncertainties of the rulemaking process under this and future Commissions.

One legislative proposal comes from Rep. Ed Markey and colleagues. Called the Internet Freedom Preservation Act of 2009, its current draft is available on the Free Press web site.

I favor the broad goals that motivate this bill — an Internet that remains friendly to innovation and broadly available. But I personally believe the current draft of this bill would be a mistake, because it embodies a very optimistic view of the FCC’s ability to wield regulatory authority and avoid regulatory capture, not only under the current administration but also over the long-run future. It puts a huge amount of statutory weight behind the vague-till-now idea of “reasonable network management” — something that the FCC’s policy statement (and many participants in the debate) have said ISPs should be permitted to do, but whose meaning remains unsettled. Indeed, Ed raised questions back in 2006 about just how hard it might be to decide what this phrase should mean.

The section of the Markey bill that would be labeled as section 12 (d) in statute says that a network management practice

. . . is a reasonable practice only if it furthers a critically important interest, is narrowly tailored to further that interest, and is the means of furthering that interest that is the least restrictive, least discriminatory, and least constricting of consumer choice available.

This language — particularly the trio of “leasts” — puts the FCC in a position to intervene if, in the Commission’s judgment, any alternative course of action would have been better for consumers than the one an ISP actually took. Normally, to call something “reasonable” means that it is within the broad range of possibilities that might make sense to an imagined “reasonable person.” This bill’s definition of “reasonable” is very different, since on its terms there is no scope for discretion within reasonableness — the single best option is the only one deemed reasonable by the statute.

The bill’s language may sound familiar — it is a modified form of the judicial “strict scrutiny” standard the courts use to review government action when the state uses a suspect classification (such as race) or burdens a fundamental right (such as free speech in certain contexts). In those cases, the question is whether or not a “compelling governmental interest” justifies the policy under review. Here, however, it’s not totally clear whose interest, in what, must be compelling in order for a given network management practice to count as reasonable. We are discussing the actions of ISPs, who are generally public companies — do their interests in profit maximization count as compelling? Shareholders certainly think so. What about their interests in R&D? Or, does the statute mean to single out the public’s interest in the general goods outlined in section 12 (a), such as “protect[ing] the open and interconnected nature of broadband networks”?

I fear the bill would spur a food fight among ISPs, each of whom could complain about what the others were doing. Such a battle would raise the probability that those ISPs with the most effective lobbying shops will prevail over those with the most attractive offerings for consumers, if and when the two diverge.

Why use the phrase “reasonable network management” to describe this exacting standard? I think the most likely answer is simply that many participants in the net neutrality debate use the phrase as a shorthand term for whatever should be allowed — so that “reasonable” turns out to mean “permitted.”

There is also an interesting secondary conversation to be had here about whether it’s smart to bar in statute, as the Markey bill would, “. . .any offering that. . . prioritizes traffic over that of other such providers,” which could be read to bar evenhanded offers of prioritized packet routing to any customer who wants to pay a premium, something many net neutrality advocates (including, e.g., Prof. Lessig) have said they think is fine.

My bottom line is that we ought to speak clearly. It might or might not make sense to let the FCC intervene whenever it finds ISPs’ network management to be less than perfect (I think it would not, but recognize the question is debatable). But whatever its merits, a standard like that — removing ISP discretion — deserves a name of its own. Perhaps “least restrictive network management”?

Cross-posted at the Yale ISP Blog.


On China's new, mandatory censorship software

The New York Times reports that China will start requiring censorship software on PCs. One interesting quote stands out:

Zhang Chenming, general manager of Jinhui Computer System Engineering, a company that helped create Green Dam, said worries that the software could be used to censor a broad range of content or monitor Internet use were overblown. He insisted that the software, which neutralizes programs designed to override China’s so-called Great Firewall, could simply be deleted or temporarily turned off by the user. “A parent can still use this computer to go to porn,” he said.

In this post, I’d like to consider the different capabilities that software like this could give to the Chinese authorities, without getting too much into their motives.

Firstly, and most obviously, this software allows the authorities to do filtering of web sites and network services that originate inside or outside of the Great Firewall. By operating directly on a client machine, this filter can be aware of the operations of Tor, VPNs, and other firewall-evading software, allowing connections to a given target machine to be blocked, regardless of how the client tries to get there. (You can’t accomplish “surgical” Tor and VPN filtering if you’re only operating inside the network. You need to be on the end host to see where the connection is ultimately going.)
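A toy illustration of that end-host advantage: software on the machine can hook the application's own connection calls and see the destination hostname before network-layer tunneling (a VPN, say) wraps it; watching a Tor client would mean hooking a different layer, but still on the host. The blocklist and hook below are invented, and nothing here reflects Green Dam's actual internals.

```python
import socket

# Invented blocklist, standing in for whatever the authorities push out.
BLOCKED_HOSTS = {"blocked.example"}

_real_create_connection = socket.create_connection

def filtering_create_connection(address, *args, **kwargs):
    host = address[0]
    # Running on the end host, the filter sees the destination the application
    # asked for, which a network-level observer outside a VPN tunnel would not.
    if host in BLOCKED_HOSTS:
        raise ConnectionRefusedError(f"blocked by local policy: {host}")
    return _real_create_connection(address, *args, **kwargs)

socket.create_connection = filtering_create_connection
```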

Software like this can do far more, since it can presumably be updated remotely to support any feature desired by the government authorities. This could be the ultimate “Big Brother Inside” feature. Not only can the authorities observe behavior or scan files within one given computer, but every computer now becomes a launching point for investigating other machines reachable over a local area network. If one such machine were connected, for example, to a private home network, behind a security firewall, the government software could still scan every other computer on the same private network, log every packet, and so forth. Would you be willing to give your friends the password to log into your private wireless network, knowing their machine might be running this software?

Perhaps less ominously, software like this could also be used to force users to install security patches, to uninstall zombie/botnet systems, and perform other sorts of remote systems administration. I can’t imagine the difficulty in trying to run the Central Government Bureau of National Systems Administration (would they have a phone number you could call to complain when your computer isn’t working, and could they fix it remotely?), but the technological base is now there.

Of course, anybody who owns their own computer will be able to circumvent this software. If you control your machine, you can control what’s running on it. Maybe you can pretend to be running the software, maybe not. That would turn into a technological arms race which the authorities would ultimately fail to win, though they might succeed in creating enough fear, uncertainty, and doubt to deter would-be circumventors.

This software will also have a notable impact in Internet cafes, schools, and other sorts of “public” computing resources, which are exactly the sorts of places that people might go when they want to hide their identity, and where the authorities could have physical audits to check for compliance.

Big Brother is watching.


Chinese Internet Censorship: See It For Yourself

You probably know already that the Chinese government censors Internet traffic. But you might not have known that you can experience this censorship yourself. Here’s how:

(1) Open up another browser window or tab, so you can browse without losing this page.

(2) In the other window, browse to baidu.com. This is a search engine located in China.

(3) Search for an innocuous term such as “freedom to tinker”. You’ll see a list of search results, sent back by Baidu’s servers in China.

(4) Now return to the main page of baidu.com, and search for “Falun Gong”. [Falun Gong is a dissident religious group that is banned in China.]

(5) At this point your browser will report an error — it might say that the connection was interrupted or that the page could not be loaded. What really happened is that the Great Firewall of China saw your Internet packets, containing the forbidden term “Falun Gong”, and responded by disrupting your connection to Baidu.

(6) Now try to go back to the Baidu home page. You’ll find that this connection is disrupted too. Just a minute ago, you could visit the Baidu page with no trouble, but now you’re blocked. The Great Firewall is now cutting you off from Baidu, because you searched for Falun Gong.

(7) After a few minutes, you’ll be allowed to connect to Baidu again, and you can do more experiments.

(Reportedly, users in China see different behavior. When they search for “Falun Gong” on Baidu, the connection isn’t blocked. Instead, they see “sanitized” search results, containing only pages that criticize Falun Gong.)
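For readers who would rather script the experiment, here is a minimal sketch that automates steps 2 through 6. It assumes Baidu's plain-HTTP search endpoint (/s with the wd parameter) and uses the third-party requests library; the keyword filtering only bites when the query is visible on the wire, so it deliberately uses http:// rather than https://.

```python
# Requires the third-party requests package (pip install requests).
import requests

def search(term):
    try:
        r = requests.get("http://www.baidu.com/s", params={"wd": term}, timeout=10)
        print(f"'{term}': HTTP {r.status_code}, {len(r.content)} bytes")
    except requests.exceptions.RequestException as exc:
        print(f"'{term}': connection disrupted ({exc.__class__.__name__})")

search("freedom to tinker")   # step 3: innocuous query, should succeed
search("falun gong")          # steps 4-5: forbidden term, expect a reset or timeout
search("freedom to tinker")   # step 6: even innocuous queries now fail for a while
```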

If you do try more experiments, feel free to report your results in the comments.


A "Social Networking Safety Act"

At the behest of the state Attorney General, legislation to make MySpace and Facebook safer for children is gaining momentum in the New Jersey State Legislature.

The proposed Social Networking Safety Act, heavily marked-up with floor amendments, is available here. An accompanying statement describes the Legislative purpose. Explanations of the floor amendments are available here.

This bill would deputize MySpace and Facebook to serve as a branch of law enforcement. It does so in a very subtle way.

On the surface, it appears to be a perfectly reasonable response to concerns about cyberbullies in general and to the Lori Drew case in particular. New Jersey was the first state in the nation to pass Megan’s Law, requiring information about registered sex offenders to be made available to the public, and state officials hope to play a similar, pioneering role in the fight against cyberbullying.

The proposed legislation creates a civil right of action for customers who are offended by what they read on MySpace or Facebook. It allows the social network provider to sue customers who post “sexually offensive” or “harassing” communications. Here’s the statutory language:

No person shall transmit a sexually offensive communication through a social networking website to a person located in New Jersey who the actor knows or should know is less than 13 years of age, or is at least 13 but less than 16 years old and at least four years younger than the actor. A person who transmits a sexually offensive communication in violation of this subsection shall be liable to the social networking website operator in a civil action for damages of $1,000, plus reasonable attorney’s fees, for each violation. A person who transmits a sexually offensive communication in violation of this subsection shall also be liable to the recipient of the communication in a civil action for damages in the amount of $5,000, plus reasonable attorney’s fees, or actual damages…

The bill requires social network providers to design their user interfaces with icons that will allow customers to report “sexually offensive” or “harassing” communications:

A social networking website operator shall not be deemed to be in violation … if the operator maintains a reporting mechanism available to the user that meets the following requirements: (1) the social networking website displays, in a conspicuous location, a readily identifiable icon or link that enables a user or third party to report to the social networking website operator a sexually offensive communication or harassing communication transmitted through the social networking website.

Moreover, the social network provider must investigate complaints, call the police when “appropriate” and banish offenders:

A social networking website operator shall not be deemed to be in violation … if … (2) the operator conducts a review, in the most expedient time possible without unreasonable delay, of any report by a user or visitor, including investigation and referral to law enforcement if appropriate, and provides users and visitors with the opportunity to determine the status of the operator’s review or investigation of any such report.

Finally, if the social network provider fails to take action, it can be sued for consumer fraud:

[I]t shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) [the state Consumer Fraud Act] for a social networking website operator to fail to revoke, in the most expedient time possible without unreasonable delay, the website access of any user or visitor upon receipt of information that provides a reasonable basis to conclude that the visitor has violated [this statute]

So what’s the problem? It’s not a criminal statute, and we do want to shut down sex offenders and cyberbullies. How could anyone object to this proposed measure?

First, the proposed law puts a special burden on one specific type of technology. It’s as if the newfangledness of social networking—and its allure for kids—has made it a special target for our fears about sex offenders and cyberbullies. No similar requirements are being placed on e-mail providers, wikis, blogs, or the phone company.

Second, it deputizes private companies to do the job of law enforcement. Social network providers will have to evaluate complaints and decide when to call the police.

Third, it’s the thin edge of the wedge. If social network providers have to investigate and report criminal activity, they will be enlisted to do more. Today, sex offenders and cyberbullies. Tomorrow, drug deals, terrorist threats and pornography.

Fourth, this raises First Amendment concerns. Social network providers, if they are called upon to monitor and punish “offensive” and “harassing” speech, effectively become an arm of law enforcement. To avoid the risk of lawsuits under the Consumer Fraud Act, they will have an incentive to ban speech that is protected under the First Amendment.

Fifth, the definitions of “offensive” and “harassing” are vague. The bill invokes the “reasonable person” standard, which is okay for garden-variety negligence cases, but not for constitutional issues like freedom of speech. It’s not clear just what kinds of communication will expose customers to investigation or liability.

If the bill is enacted, MySpace and Facebook could mount a legal challenge in federal court. They could argue that Congress intended to occupy the field of internet communication, and thus pre-empt state law, when it adopted the Communications Decency Act (CDA), 47 U.S.C. § 230(c)(1).

The bill probably violates the Dormant Commerce Clause as well. It would affect interstate commerce by differentially regulating social networking websites. Social networking services outside New Jersey can simply ignore the requirements of state law. Federal courts have consistently struck down these sorts of laws, even when they are designed to protect children.

In my opinion, the proposed legislation projects our worst fears about stalkers and sex predators onto a particular technology—social networking. There are already laws that address harassment and obscenity, and internet service providers are already obliged to cooperate with law enforcement.

Studies suggest that for kids online, education is better than restriction. This is the conclusion of the Internet Safety Technical Task Force, convened by the state attorneys general of the United States, in its report Enhancing Child Safety and Online Technologies. According to another study funded by the MacArthur Foundation, social networking provides benefits, including opportunities for self-directed learning and independence.