December 14, 2024

April 27 Workshop at Princeton CITP: Internet Security, Internet Freedom

On April 27th, the Center for Information Technology Policy is hosting a one-day workshop on campus here at Princeton. We invite you to attend. Here is the summary of the event, called Internet Security, Internet Freedom:

The internet is at once a means for great openness and great control — expression and exclusion. These forces have long been at work online, but have recently come to the fore in debates over the United States’ cyber security policy and its increased focus on “internet freedom.” The country now has a Cybersecurity “czar” who has presented a 12-part national initiative, and a Secretary of State who has forcefully stated the case for internet freedom. But what do these principles mean in practice?

This workshop explores how security and freedom both complement and compete with each other. A spectrum of security risks at different layers of the network begs for technical and governance solutions. Flash points like the recent Google-in-China developments highlight the nexus of security and speech. A growing discourse about internet freedom calls out for workable theories and models. This event will bring together technologists, policymakers, and academics to discuss the state of play and viable ways forward.

The keynote speaker will be Alec Ross, Senior Advisor for Innovation in the Office of the Secretary of State. Alec will discuss the State Department’s increased focus on the issue of Internet freedom. He recently commented that 2009 was “the worst year in the history of the Internet as it related to Internet freedom.” The panels will feature a variety of experts on online freedom as well as network security.

Please join us. For more information and instructions on how to register, see the workshop page here:
http://citp.princeton.edu/internet-security-internet-freedom/

CITP is a Google Summer of Code 2010 Mentoring Organization

The Google Summer of Code program provides student stipends for summer work on open source projects. CITP is thrilled to have been chosen as a mentoring organization for 2010, meaning that students will be working on some CITP projects this summer. We think these projects are very interesting, and potential participants now have the opportunity to propose ideas for what they’d like to work on. Applications are accepted from March 29 to April 9.

You can browse our list of project ideas, read our overall description, and apply here.

Round 2 of the PACER Debate: What to Expect

The past year has seen an explosion of interest in free access to the law. Indeed, something of a movement appears to be coalescing around the issue, due in no small part to the growing Law.gov effort (see the latest list of events). One subset of this effort is our work on PACER, the online document access system for the federal courts. We contend that access to electronic court records should be free (see posts from me, Tim, and Harlan). Our RECAP project helps make some of these documents more accessible, and has gained adoption far above our expectations. That being said, RECAP doesn’t solve the fundamental problem: the federal government needs to publish the full public record for free online. Today, this argument came from an unlikely source: the FCC’s National Broadband Plan.

RECOMMENDATION 15.1: the primary legal documents of the federal government should be free and accessible to the public on digital platforms. […]

– For the Judicial branch, this should apply to all judicial opinions.

[…] Finally, all federal judicial decisions should be accessible for free and made publicly available to the people of the United States. Currently, the Public Access to Court Electronic Records system charges for access to federal appellate, district and bankruptcy court records.[7] As a result, U.S. federal courts pay private contractors approximately $150 million per year for electronic access to judicial documents.[8] [Steve note: The correct figure is $150m over 10 years. However it is quite possible that the federal government as a whole spends $150m or more per year for access to case materials.] While the E-Government Act has mandated that this system change so that this information is as freely available as possible, little progress has been made.[9] Congress should consider providing sufficient funds to publish all federal judicial opinions, orders and decisions online in an easily accessible, machine-readable format.

[7] See Public Access To Court Electronic Records—Overview, http://pacer.psc.uscourts.gov/pacerdesc.html (last visited Jan. 7, 2010).
[8] Carl Malamud, President and CEO, Public.Resource.Org, By the People, Address at the Gov 2.0 Summit, Washington, D.C. 25 (Sept. 10, 2009), available at http://resource.org/people/3waves_cover.pdf
[9] See Letter from Sen. Joseph I. Lieberman to Carl Malamud, President and CEO, Public.Resource.Org (Oct. 13, 2009), available at http://bulk.resource.org/courts.gov/foia/gov.senate.lieberman_20091013_from.pdf

This issue is outside of the Commission’s direct jurisdiction, but the Broadband Plan is intended as a blueprint for the federal government as a whole. In that context, the notion of ensuring that primary legal materials are available for free online fits perfectly with a broader effort to make government digitally accessible. In a similar vein, a bill was introduced today by Rep. Israel. The Public Online Information Act, backed by the Sunlight Foundation, creates a new federal advisory committee to advise all three branches of government on how to make government information available online for free.

To establish an advisory committee to issue nonbinding government-wide guidelines on making public information available on the Internet, to require publicly available Government information held by the executive branch to be made available on the Internet, to express the sense of Congress that publicly available information held by the legislative and judicial branches should be available on the Internet, and for other purposes.

These two developments are the first of what I expect to be many announcements in the coming months, coming from places like the transparency caucus. These announcements will share a theme — there is a growing mandate for universal free access to government information, and judicial information is a key component of that mandate. These requirements will increasingly go to the heart of full free access to the public record, and will reveal the discrepancies between different branches in this regard.

The FCC’s language doesn’t quite get everything right. Most notably, it focuses on opinions even though other components of the record are key to the public’s understanding of the law. Opinions on PACER are already theoretically free, but the kludgy system for accessing them doesn’t include all of the opinions, isn’t indexable by search engines, and gives only minimal information about the case each opinion is a part of. Furthermore, the docket text required to understand the context and the search functionality required to find the opinions both require a fee. Subsequent calls for free access to case materials will have to be more holistic than the opinions-only language of the Broadband Report.

The POIA language is also a step forward. A federal advisory committee is a good thing in the context of a branch that is more accustomed to the adversarial process than notice-and-comment. However, we will need much more concrete requirements before we will have achieved our goals.

In the context of these announcements, the Administrative Office of the Courts made its own announcement today. The Judicial Conference has voted in favor of two measures that make incremental improvements on the current pay-wall model of access to PACER.

  • Adjust the Electronic Public Access fee schedule so that users are not billed unless they accrue charges of more than $10 of PACER usage in a quarterly billing cycle, in effect quadrupling the amount of data available without charge. Currently, users are not billed until their accounts total at least $10 in a one-year period.
  • Approve a pilot in up to 12 courts to publish federal district and bankruptcy court opinions via the Government Printing Office’s Federal Digital System (FDsys) so members of the public can more easily search across opinions and across courts.

These are minor tweaks on a fundamentally limited system. Don’t get me wrong — a world with these changes is better than a world without. It is slightly easier to avoid spending more than $10 in a given quarter than in a given year, but you are nevertheless likely to exceed it unless you know exactly what you are looking for and retrieve only a few documents. It’s also good to establish a precedent for the GPO publishing case materials, but that doesn’t require a limited trial that could end in bureaucratic quagmire. The GPO can handle publishing many documents, and any reasonably qualified software engineer could figure out how to deliver them in short order. What’s more, the courts could provide universal free public access today, with zero engineering work: offer a single PACER login that is never billed or, better yet, just stop billing all accounts.
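To put rough numbers on the quarterly cap, here is a minimal sketch of the fee arithmetic. It assumes PACER’s then-current rate of $0.08 per page (the rate is an assumption for illustration; actual billing also caps the charge per document):

    # Rough arithmetic for the PACER fee-waiver change.
    # Assumption: $0.08 per page, the rate at the time (illustrative only).
    RATE_PER_PAGE = 0.08  # dollars

    def free_pages_per_year(cap_dollars, billing_periods_per_year):
        """Pages retrievable per year while staying under the billing threshold."""
        return int(cap_dollars / RATE_PER_PAGE) * billing_periods_per_year

    old = free_pages_per_year(10, 1)  # $10 threshold, billed yearly
    new = free_pages_per_year(10, 4)  # $10 threshold, billed quarterly

    print(f"Old policy: ~{old} free pages/year")  # ~125
    print(f"New policy: ~{new} free pages/year")  # ~500, i.e. quadrupled

A few hundred free pages a year evaporates quickly once you start browsing dockets rather than retrieving single known documents.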

The next round of the PACER debate will be over whether we make a fundamental change in access to federal court records or settle for minor tweaks and call it a day.

Web Security Trust Models

[This is part of a series of posts on this topic: 1, 2, 3, 4, 5, 6, 7, 8.]

Last week, Ed described the current debate over whether Mozilla should allow an organization that is allegedly controlled by the Chinese government to be a default trusted certificate authority. The post prompted some very insightful feedback, including questions about alternative trust models. I will try to lay out the different types of models on a high level, and I encourage corrections or clarifications. It’s worth restating what we’re talking about: how you, as a web user, know that the site you are talking to is who it claims to be (if it is, you can be confident that your other security measures, like end-to-end encryption, are working).

Flat and Inflexible
This is the model we use now. Your browser comes pre-loaded with a list of Certificate Authorities that it will trust to guarantee the authenticity of web sites you visit. For instance, Mozilla (represented by the little red dragon in the diagram) ships Firefox with a list of pre-approved CAs. Each browser vendor makes its own list (here is Mozilla’s policy for how to get added). The other major browsers use the same model and have already allowed CNNIC to become trusted for their users. This is a flat model because each CA has just as much authority as the others; each effectively sits at the “root” of authority. Indeed, any of the CAs can sign certificates for any entity in the world (hence the asterisk in each). They do not coordinate with each other, and can sign a certificate for an entity even if another CA has already done so. Furthermore, they can confer this god-like power on other entities without oversight or the prior knowledge of the end users or the entities being signed for.

This is also an inflexible model because there is no reasonable way to impose finer-grained control on the authority of the CAs. The standard used is called X.509. It doesn’t allow you to trust Verisign to a greater or lesser extent than the Chinese government — it is essentially all or nothing for each. You also can’t tell your browser to trust CNNIC only for sites in China (although domain name constraints do exist in the standard, they are not widely implemented). It is also inflexible because most browsers intentionally make it difficult for a user to change the certificate list. It might be possible to partially mitigate some of the CA/X.509 shortcomings by implementing more constraints, improving the user interface, adding “out of band” certificate checks (like Perspectives), or generating more paranoid certificate warnings (like Certificate Patrol).
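To make the flatness concrete, here is a minimal Python sketch using the standard library’s ssl module. It lists the root CAs your platform trusts by default — any one of which can vouch for any site on the web. (The count and contents vary by system, and some platforms load the store differently, so treat the output as illustrative.)

    import ssl

    # Load the platform's default trust store, as a TLS client would.
    ctx = ssl.create_default_context()

    # Every certificate in this list is an unconstrained root: any one of
    # them can sign a certificate for any domain, and clients will accept it.
    roots = ctx.get_ca_certs()
    print(f"{len(roots)} root CAs trusted by default")

    for cert in roots[:5]:
        # 'subject' is a tuple of relative distinguished names; pull out
        # the organization name of each root for a quick look.
        fields = dict(rdn[0] for rdn in cert["subject"])
        print(fields.get("organizationName"))

Nothing in that data structure says “trust this CA only for .cn” or “trust this one less” — which is precisely the inflexibility described above.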

Decentralized and Dependent
In the early days of the web, an alternative approach already existed. This model did away entirely with a default set of external trusted entities and gave complete control to the individual. The idea was that you would start by trusting only people you “knew” (smiley faces in the diagram) and then build out a “web of trust.” You extend this web by trusting those people to vouch for others that you haven’t met (kind of like a secure virtual version of Goodfellas). This makes it a fundamentally decentralized model. There is nothing stopping certain entities from gaining the trust of many people and thereby becoming de facto Certificate Authorities. This has only happened within technically proficient communities, and in the case of USENIX the service was eventually discontinued.

So, this is a system that is highly dependent on having some prior connection with whoever you want to communicate with. It has enjoyed some limited success via the PGP family of standards, but mostly for applications such as email or in more constrained situations like inter/intra-enterprise security. It is possible that with the boom in online social networks there is a new opportunity to renew interest in a web-of-trust style security architecture. The approach seems less practical for general web security because it requires the user to have some existing trust relationship with a site before using it securely. It is not necessarily an impossible approach — and the mod_openpgp and mod_gnutls projects show some technical promise — but as a practical matter, wide-scale adoption of a “web of trust” style security model for the web seems unlikely.
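As a rough illustration of the mechanics — not of PGP’s actual data formats or signature checks — here is a hypothetical sketch of how trust propagates through endorsements in such a web:

    from collections import deque

    # Hypothetical endorsement graph: each key lists the keys it vouches for.
    # (Real PGP adds cryptographic signatures, trust levels, and path limits.)
    endorsements = {
        "me":    ["alice", "bob"],
        "alice": ["carol"],
        "bob":   ["carol", "dave"],
        "carol": ["example.org"],
    }

    def trusted_from(start, graph, max_hops=3):
        """Collect every identity reachable within max_hops endorsements."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            if hops == max_hops:
                continue
            for peer in graph.get(node, []):
                if peer not in seen:
                    seen.add(peer)
                    queue.append((peer, hops + 1))
        return seen - {start}

    print(trusted_from("me", endorsements))
    # {'alice', 'bob', 'carol', 'dave', 'example.org'}

The catch is visible in the graph itself: if you have no endorsement path to a site, you have no basis for trusting it, which is exactly the bootstrapping problem for general web browsing.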

Hierarchical and Delegated
A third approach starts with a single highly trusted root and delegates authority recursively. Any authority can only issue certificates for itself or the entities that fall “underneath” it, thus limiting the god-like power of the flat model. This also pushes signing power closer to the authenticated sites themselves. It is possible that this authority could be placed directly in their hands, rather than requiring an external authority to approve of each new certificate or domain. Note that I am describing this in a very domain-centric way. If we are willing to fully buy into the domain hierarchy way of thinking about web security, there may be a viable implementation path for this model.

Perhaps the greatest example of this delegation approach to web governance is the existing Domain Name System. Decisions at the root of DNS are governed by the international non-profit ICANN, which assigns authority to Top-Level Domains (e.g., .com, .net, .cn), which then delegate further through a system of registrars. The biggest problem with tying site authentication to DNS is that DNS is deeply insecure. However, within the next year a more secure version of DNS, DNSSEC, is scheduled to be deployed at the DNS root. Any DNSSEC query can be verified by following the chain of authority back to the root, and the contents of the response can be guaranteed to be unaltered through that chain of trust. The question is whether this infrastructure can be the basis for distributing site certificates as well, which could form the basis for hierarchical site authenticity (and would also permit encryption of traffic). CNNIC happens to also be the registry for the .cn TLD, so in this case it would be restricted to creating certificates for .cn domains. This approach is advocated by Dan Kaminsky (interview, presentation) and Paul Vixie (here, here). I’ve also found posts by Eric Rescorla and Jason Roysdon informative.
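The key property — authority only over names “underneath” you — is easy to express in code. Here is a minimal, purely illustrative model (not the DNSSEC wire protocol) of what a name-constrained authority looks like:

    # Illustrative model of hierarchical delegation: an authority may only
    # sign for names that fall within its own zone of the DNS hierarchy.

    def in_zone(name, zone):
        """True if 'name' falls under 'zone' in the DNS hierarchy."""
        return zone == "." or name == zone or name.endswith("." + zone)

    class Authority:
        def __init__(self, zone):
            self.zone = zone

        def sign(self, name):
            # Unlike a flat-model CA, the check below refuses anything
            # outside this authority's own branch of the tree.
            if not in_zone(name, self.zone):
                raise PermissionError(f"{self.zone!r} cannot sign for {name!r}")
            return f"cert for {name}, signed by {self.zone}"

    cn_registry = Authority("cn")          # e.g. the registry's position for .cn
    print(cn_registry.sign("example.cn"))   # allowed: under its zone

    try:
        cn_registry.sign("example.com")     # outside its zone
    except PermissionError as err:
        print(err)

Contrast this with the flat model above, where the `sign` method would have no `in_zone` check at all.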

If implemented via DNSSEC, this approach would thoroughly bind web site authentication to the DNS hierarchy, and the only assurance it would provide is that you are communicating with the person who registered the domain you are visiting. It would not provide any additional verification about who that person is, as Certificate Authorities theoretically could do (but practically don’t). Certificates were originally envisioned as a way to guarantee that a particular real-world entity was behind the site in question, but market pressures caused CAs to cut corners on the verification process. Most CAs now offer “Domain Validation” (DV) certificates that are issued without any human intervention and simply verify that the person requesting the certificate has control of the domain in question. These certificates are treated no differently than more rigorously verified certificates, so for all intents and purposes the DNSSEC certificate delegation model would provide at least the services of the current CA model. One exception is Extended Validation certificates, which require the CA to perform more rigorous checks and cause the browser URL bar to take on a “green glow”. It should, however, be noted that there are some security flaws with the current implementation.
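For comparison, the entire substance of a Domain Validation check can be sketched in a few lines. This is a hypothetical token-file check — real CAs’ procedures vary, and the URL path here is invented for illustration:

    import secrets
    import urllib.request

    def domain_validation(domain):
        """Hypothetical DV check: the applicant is asked (out of band) to
        serve a random token, which we then fetch from the domain itself.
        This proves control of the domain, not real-world identity."""
        token = secrets.token_hex(16)
        # Assumed path; the applicant would place the token file here.
        url = f"http://{domain}/.well-known/ca-challenge/{token}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read().decode().strip() == token
        except OSError:
            return False

Nothing in that check ties the certificate to a real-world entity, which is exactly the gap Extended Validation was meant to fill.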

[Update: I discuss the DNSSEC approach in more detail here]

Open Questions
Are there appropriate stopgap measures on the existing CA model that can limit authority of certain political entities? Are there viable user interface improvements? Are users aware enough of these issues to do anything meaningful with more information about certificates? Does the hierarchical model force us to trust ICANN, and do we? Does the DNS hierarchy appropriately allocate authority? Is domain name enough of a proxy for identity that a DNS-based system makes sense? Do we need better ways of independently validating a person’s identity and binding that to their public key? Even if an alternative model is better, how do we motivate adoption?

Android Open Source Model Has a Short Circuit

[Update: Google subsequently worked out a mechanism that allows Cyanogen and others to distribute their mods separate from the Google Apps.]

Last year, Google entered the mobile phone market with a Linux-based mobile operating system. The company brought together device manufacturers and carriers in the Open Handset Alliance, explaining that, “Together we have developed Android™, the first complete, open, and free mobile platform.” There has been considerable engagement from the open source developer community, as well as significant uptake from consumers. Android may have even been instrumental in motivating competing open platforms like LiMo. In addition to the underlying open source operating system, Google chose to package essential (but proprietary) applications with Android-based handsets. These applications include most of the things that make the handsets useful (including basic functions to sync with the data network). This two-tier system of rights has created a minor controversy.

A group of smart open source developers created a modified version of the Android+Apps package, called Cyanogen. It incorporated many useful and performance-enhancing updates to the Android OS, and included unchanged versions of the proprietary Apps. If Cyanogen hadn’t included the Apps, the package would have been essentially useless, given that Google doesn’t appear to provide a means to install the Apps on a device that has only the basic OS. As Cyanogen gained popularity, Google decided that it could no longer watch the project distribute its copyright-protected works. The lawyers at Google sent a Cease & Desist letter to the Cyanogen developer, which caused him to take the files off his site and spurred backlash from the developer community.

Android represents a careful balance on the part of Google, in which the company seeks to foster open platforms but maintain control over its proprietary (but free) services. Google has stated as much, in response to the current debate. Android is an exciting alternative to the largely closed-source model that has dominated the mobile market to date. Google closely integrated their Apps with the operating system in a way that makes for a tremendously useful platform, but in doing so hampered the ability of third-party developers to fully contribute to the system. Perhaps the problem is simply that they did not choose the right location to draw the line between open vs. closed source — or free-to-distribute vs. not.

The latter distinction might offer a way out of the conundrum. Google could certainly grant blanket rights to third parties to redistribute unchanged versions of its Apps. This might compromise its ability to make certain business arrangements with carriers or handset providers in which it packages the software for a fee. That may or may not be worth it from a business perspective, but Google could have trouble claiming that Android is a “complete, open, and free mobile platform” if it doesn’t find a way to make it work for developers.

This all takes place in the context of a larger debate over the extent to which mobile platforms should be open — voluntarily or via regulatory mandate. Google and Apple have been arguing via letters to the FCC about whether Apple should allow the Google Voice application in the iPhone App Store. However, it remains to be seen whether the Commission has the jurisdiction and political will to do anything about the issue. There is a fascinating sideshow in that particular dispute, in which AT&T has made the very novel claim that Google Voice violates network neutrality (well, either that or common carriage — they’ll take whichever argument they can win). Google has replied. This is a topic for another day, but suffice it to say the clear regulatory distinctions between telephone networks, broadband, and devices have become muddied.

(Cross-posted to Managing Miracles)