Archives for March 2011

Building a better CA infrastructure

As several Tor project authors, Ben Adida and many others have written, our certificate authority infrastructure has the flaw that any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now that’s just untenable. So what solutions are available?

First, some non-solutions: Extended validation certs do nothing useful. Will users ever be trained well enough to look for the extra changes in browser behavior, and to raise the alarm when those changes are absent with a normal cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
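The tracking idea is simple enough to sketch. Here is a minimal illustration (the host name and storage structure are mine, not from Certificate Patrol): remember the fingerprint of the certificate you saw on first contact, and flag any later change as an anomaly worth a prominent warning.

```python
# Sketch of the "remember what you saw" idea: store a certificate's
# fingerprint the first time we contact a site, then warn loudly if a
# later connection presents a different one.
import hashlib
import socket
import ssl

def get_cert_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check_against_history(host, fingerprint, history):
    """Compare a freshly observed fingerprint with what we saw before.

    Returns 'first-sighting', 'match', or 'CHANGED' -- the last being
    the anomaly case that should trigger a prominent warning."""
    previous = history.get(host)
    if previous is None:
        history[host] = fingerprint
        return "first-sighting"
    return "match" if previous == fingerprint else "CHANGED"
```

A real extension has to handle legitimate certificate rollover gracefully, which is where the hard user-interface work lives.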

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.
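The notary comparison itself boils down to a vote among observers. A toy sketch (notary names and the quorum threshold are illustrative, not from the CMU design): each notary reports the fingerprint it sees for a site from its own vantage point, and we accept our connection only if enough of them agree with us.

```python
# Sketch of the notary idea: observers at different network vantage
# points each report the certificate fingerprint they see for a site.
# If what we received disagrees with the consensus, something between
# us and the site is probably tampering.

def notary_verdict(our_fingerprint, notary_reports, quorum=0.5):
    """notary_reports maps a notary name to the fingerprint it observed.

    Returns True if at least `quorum` of the notaries saw the same
    fingerprint we did; False otherwise (including when we have no
    independent evidence at all)."""
    if not notary_reports:
        return False
    agree = sum(1 for fp in notary_reports.values() if fp == our_fingerprint)
    return agree / len(notary_reports) >= quorum
```

Of course, as noted above, an attacker who controls your whole network path can simply block the notary queries, so a real client also has to decide what to do when no reports come back.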

UPDATE: Google is now doing almost exactly what I suggested.

There are a variety of other proposals out there, notably trying to leverage DNSSEC to enhance or supplant the need for SSL/TLS certificates. Since DNSSEC provides more control over your DNS records, it also provides more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attacker firewalls to filter without breaking everything, so I certainly hope this takes off.
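The core of the DANE-style check is tiny. A sketch, with the record value illustrative (real TLSA records also carry usage, selector, and matching-type fields, per RFC 6698): the domain owner publishes a hash of the expected certificate in a DNSSEC-signed DNS record, and the client compares the certificate it actually received against that hash.

```python
# Sketch of a DANE-style check layered on DNSSEC: the domain owner
# publishes a hash of the expected certificate in a signed DNS record,
# and the client verifies the certificate it received against it.
import hashlib

def matches_tlsa(cert_der, tlsa_hash_hex):
    """Return True if the certificate's SHA-256 digest matches the
    hash published in the (DNSSEC-signed) TLSA-style record."""
    return hashlib.sha256(cert_der).hexdigest() == tlsa_hash_hex
```

The point is that the authority to bless a certificate moves from thousands of CAs to the entity that already controls the domain’s DNS records.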

Let’s say that future browsers properly use all of these tricks and can unquestionably determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists that it proxy all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor’s note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 — as well as the “Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates” event that CITP hosted in DC last year with policymakers, industry, and activists.]

The case of Prof. Cronon and the FOIA requests for his private emails

Prof. William Cronon, from the University of Wisconsin, started a blog, Scholar as Citizen, wherein he critiqued Republican policies in the State of Wisconsin and elsewhere. I’m going to skip the politics and focus on the fact that the Republicans used Wisconsin’s FOIA mechanism to ask for a wide variety of his emails and they’re likely to get them.

Cronon believes this is a fishing expedition to find material to discredit him and he’s probably correct. He also notes that he scrupulously segregates his non-work-related emails into a private account (perhaps Gmail) while doing his work-related email using his wisc.edu address, as well he should.

What I find fascinating about the Cronon case is that it highlights a threat model for email privacy that doesn’t get much discussion among my professional peers. Sophisticated cryptographic mechanisms don’t protect emails against a FOIA request (or, for that matter, a sufficiently motivated systems administrator).

In the past, when I’ve worked with lawyers and our communications weren’t privileged (i.e., opposing counsel would eventually receive every email we ever exchanged), we instead exchanged emails of the form “are you available for a phone call at 2pm?” and not much else. This is annoying when working on a lawsuit, and it would completely grind to a halt the regular business of a modern academic.

While Cronon doesn’t want to abandon his wisc.edu address, consider that he could simply forward his email to Gmail and have the university system delete its local copy (which is certainly an option for me with my rice.edu email). At that point, it becomes an interesting legal question of whether a FOIA request can compel production of content from his “private” email service. (And, future lawmaking could well explicitly extend the reach of FOIA to private accounts, particularly when many well-known politicians and others subject to FOIA deliberately conduct their professional business on private servers.)

Here’s another thing to ponder: When I send email from Gmail, it happily forges my rice.edu address in the from line. This allows me to use Gmail without most of the people who correspond with me ever knowing or caring that I’m using Gmail. By blurring the lines between my rice.edu and gmail.com email, am I also blurring the boundary of legal requests to discover my email? Since Rice is a private university, there are presumably no FOIA issues for me, but would it be any different for Prof. Cronon? Could or should present or future FOIA laws compel you to produce content from your “private” email service when you conflate it with your “professional” email address?

Or, leaving FOIA behind for the minute, could or should my employer have any additional privilege to look into my Gmail account when I’m using it for all of my professional emails and forging a rice.edu mail header?
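To make concrete how easily the From line can be set: it is just data the sender writes, entirely independent of the account actually used to transmit the message. A minimal sketch (the recipient address is made up; the two sender addresses are the ones discussed in this post):

```python
# The From header is simply a field the sending software fills in; any
# mail client or service can write any address there, which is exactly
# how Gmail can label outgoing mail with a rice.edu address.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "dwallach@rice.edu"   # sent via Gmail, labeled as rice.edu
msg["To"] = "colleague@example.edu"  # hypothetical recipient
msg["Subject"] = "Are you available for a phone call at 2pm?"
msg.set_content("Short scheduling note; substance saved for the call.")
```

Nothing in SMTP itself stops this; countermeasures like SPF and DKIM exist precisely because the header alone proves nothing about the sending account.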

One last alternative: Let’s say I appended some text like this at the bottom of my email:

My personal email is dwallach at gmail.com and my professional email is dwallach at rice.edu. Please use the former for personal matters and the latter for professional matters.

If I go to explicit lengths to separate the two email addresses, using separate services, and making it abundantly clear to all my correspondents which address serves which purpose, could or should that make for a legally significant difference in how FOIA treats my emails?

Do photo IDs help prevent vote fraud?

In many states, an ID is required to vote. The ostensible purpose is to prevent people from casting a ballot for someone else – dead or alive. Historically, it was also used to prevent poor and minority voters, who are less likely to have government IDs, from voting.

No one would (publicly) admit to the second goal today, so the first is always the declared purpose. But does it work?

In my experience as a pollworker in Virginia, the answer is clearly “no”. There are two basic problems – the rules for acceptable IDs are so broad (so as to avoid disenfranchisement) as to be useless, and pollworkers are given no training as to how to verify an ID.

Let’s start with what Virginia law says. The Code of Virginia 24.2-643 reads in part:

An officer of election shall ask the voter for his full name and current residence address and repeat, in a voice audible to party and candidate representatives present, the full name and address stated by the voter. The officer shall ask the voter to present any one of the following forms of identification: his Commonwealth of Virginia voter registration card, his social security card, his valid Virginia driver’s license, or any other identification card issued by a government agency of the Commonwealth, one of its political subdivisions, or the United States; or any valid employee identification card containing a photograph of the voter and issued by an employer of the voter in the ordinary course of the employer’s business. If the voter’s name is found on the pollbook, if he presents one of the forms of identification listed above, if he is qualified to vote in the election, and if no objection is made, […]

Let’s go through these one at a time.

  • A voter registration card has no photo or signature, and little other identifying information, so there’s no way to validate it. Since voters don’t sign the pollbook in Virginia (as they do in some other states), there would be no signature to compare against even if the card had one. And since the voter card is just a piece of paper with no watermark, it’s easy to fabricate on a laser printer.
  • A Social Security Card (aside from the privacy issues of sharing the voter’s SSN with the pollworker) is usually marked “not for identification”. And it has no photo or address.
  • A Virginia driver’s license has enough information for identification (i.e., a photo and signature, as well as the voter’s address).
  • Other Virginia, locality, or Federal ID. Sounds good, but I have no clue what all the different possible IDs that fall into this category look like, so I have no idea as a pollworker how to tell whether they’re legitimate or not. (On the positive side, a passport is allowed by this clause – but it doesn’t have an address.)
  • Employee ID card. This is the real kicker. There are probably ten thousand employers in my county. Many of them don’t even follow a single standard for employee IDs (my own employer had several versions until earlier this year, when anyone with an old ID was “upgraded”). I don’t know the name of every employer, much less how to distinguish a valid ID from an invalid one. If the voter’s name and photo are on the card, along with some company name or logo, that’s probably good enough. Any address on the card is going to be of the employer, not the voter.

So if I want to commit fraud (a felony) and vote for someone else (living or dead), how hard is it? Simple: create a laminated ID with company name “Bob’s Plumbing Supply” and the name of the voter to be impersonated, memorize the victim’s address, and that’s all it takes.

Virginia law also allows the voter who doesn’t have an ID with him/her to sign an affidavit that they are who they say they are. Falsifying the affidavit is a felony, but it really doesn’t matter if you’re already committing a felony by voting for someone else.

Now let’s say the laws were tightened to require a driver’s license, military ID, or a passport, and no others (and eliminate the affidavit option). Then at least it would be possible to train pollworkers what an ID looks like. But there are still two problems. First, the law says the voter must present the ID, but it never says what the pollworker must do with it. And second, the pollworkers never receive any training in how to verify an ID – a bouncer at a bar gets more training in IDs than a pollworker safeguarding democracy. In Virginia, when renewing a driver’s license the person has the choice to continue to use the previous picture, or to wait in line a couple hours at a DMV site to get a new picture. Not surprisingly, most voters have old pictures. Mine is ten years old, and dates from when I had a full head of hair and a beard, both of which have long since disappeared. Will a pollworker be able to match the IDs? Probably not – but since no one ever tries, that doesn’t matter. And passports are good for 10 years, so the odds are that picture will be quite old too. I’m really bad at matching faces, so when I’m working as a pollworker I don’t even try.

There are some positive things about requiring an ID. Most voters present their drivers license, frequently without even being asked. If the name is complex or the voter has a heavy accent or the room is particularly noisy, or the pollworker is hard of hearing (or not paying close attention), having the written name is a help. But that’s about it.

So what can we learn from this? Photo ID laws for voting, especially those that allow for company ID cards, are almost useless for preventing voting fraud. It’s the threat of felony prosecution, combined with the fact that the vast majority of voters are honest, that prevents vote fraud… not the requirement for a photo ID.

Web Browsers and Comodo Disclose A Successful Certificate Authority Attack, Perhaps From Iran

Today, the public learned of a previously undisclosed compromise of a trusted Certificate Authority — one of the entities that issues certificates attesting to the identity of “secure” web sites. Last week, Comodo quietly issued a command via its certificate revocation servers designed to tell browsers to no longer accept 9 certificates. This is fairly standard practice, and certificates are occasionally revoked for a variety of reasons. What was unique about this case is that it was followed by very rapid updates by several browser manufacturers (Chrome, Firefox, and Internet Explorer) that go above and beyond the normal revocation process and hard-code the certificates into a “do not trust” list. Mozilla went so far as to force this update in as the final change to their source code before shipping their major new release, Firefox 4, yesterday.
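In effect, the browsers’ emergency response amounts to consulting a built-in distrust list before the normal revocation machinery ever runs. A toy sketch of that idea (the issuer names and serial numbers below are placeholders, not the real values from this incident):

```python
# Sketch of a hard-coded "do not trust" list: certificates are keyed by
# (issuer, serial number) and rejected outright, regardless of whether
# CRL or OCSP servers are reachable -- which matters, since an attacker
# in the network path can block those servers.
BLACKLIST = {
    ("Example CA", "00:f5:c8:6a:f3"),   # placeholder entry
    ("Example CA", "00:d7:55:8f:da"),   # placeholder entry
}

def is_blacklisted(issuer, serial):
    """Return True if this certificate is on the hard-coded distrust
    list and must be rejected before any other validation."""
    return (issuer, serial) in BLACKLIST
```

Shipping the list inside the browser binary is exactly what sidesteps the weakness of revocation checks: there is nothing for a hostile network to filter.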

This implied that the certificates were likely malicious, and may even have been used by a third party to impersonate secure sites. Here at Freedom to Tinker we have explored several ways in which the current browser security system is prone to vulnerabilities [1, 2, 3, 4, 5, 6].

Clearly, something exceptional happened behind the scenes. Security hacker Jacob Appelbaum did some fantastic detective work using the EFF’s SSL Observatory data and discovered that all of the certificates in question originated from Comodo — perhaps from one of the many affiliated companies that issue certificates under Comodo’s authority via their “Registration Authority” (RA) program. Evidently, someone had figured out how to successfully attack Comodo or one of their RAs, or had colluded with them in getting some invalid certs.

Appelbaum’s pressure helped motivate a statement from Mozilla [and a follow-up] and a statement from Microsoft that gave a little bit more detail. This afternoon, Comodo released more details about the incident, including the domains for which rogue certificates were issued: mail.google.com, www.google.com, login.yahoo.com (3 certs), login.skype.com, addons.mozilla.org, login.live.com, and “global trustee”. Comodo noted:

“The attack came from several IP addresses, but mainly from Iran.”

and

“this was likely to be a state-driven attack”

[Update: Someone claiming to be the hacker has posted a manifesto. At least one security researcher finds the claim to be credible.]

It is clear that the domains in question are among the most attractive targets for someone who wants to surveil the personal communications of many people online by inserting themselves as a “man in the middle.” I don’t have any deep insights on Comodo’s analysis of the attack’s origins, but it seems plausible. (I should note that although Comodo claims that only one of the certificates was “seen live on the Internet”, their mechanism for detecting this relies on the attacker not taking some basic precautions that would be well within the means and expertise of someone executing this attack.) [update: Jacob Appelbaum also noted this, and has explained the technical details]

What does this tell us about the current security model for web browsing? This instance highlights a few issues:

  • Too many entities have CA powers: As the SSL Observatory project helped demonstrate, there are thousands of entities in the world that have the ability to issue certificates. Some of these are trusted directly by browsers, and others inherit their authority. We don’t even know who many of them are, because such delegation of authority — either via “subordinate certificates” or via “registration authorities” — is not publicly disclosed. The more of these entities exist, the more vulnerabilities exist.
  • The current system does not limit damage: Any entity that can issue a certificate can issue a certificate for any domain in the world. That means that a vulnerability at one point is a vulnerability for all.
  • Governments are a threat: All the major web browsers currently trust many government agencies as Certificate Authorities. This often includes places like Tunisia, Turkey, UAE, and China, which some argue are jurisdictions hostile to free speech. Hardware products exist and are marketed explicitly for government surveillance via a “man in the middle” attack.
  • Comodo in particular has a bad track record with their RA program: The structure of “Registration Authorities” has led to poor or nonexistent validation in the past, but Mozilla and the other browsers have so far refused to take any action to remove Comodo or put them on probation.
  • We need to step up efforts on a fix: Obviously the current state of affairs is not ideal. As Appelbaum notes, efforts like DANE, CAA, HASTLS, and Monkeysphere deserve our attention.

[Update: Jacob Appelbaum has posted his response to the Comodo announcement, criticizing some aspects of their response and the browsers.]

[Update: A few more details are revealed in this Comodo blog post, including the fact that “an attacker obtained the username and password of a Comodo Trusted Partner in Southern Europe.”]

[Update: Mozilla has made Appelbaum’s bug report publicly visible, along with the back-and-forth between him and Mozilla before the situation was made public. There are also some interesting details in the Mozilla bug report that tracked the patch for the certificate blacklist. There is yet another bug that contains the actual certificates that were issued. Discussion about what Mozilla should do in further response to this incident is proceeding in the newsgroup dev.mozilla.security.policy.]

[Update: I talked about this issue on Marketplace Tech Report.]

You may also be interested in an October 22, 2010 event that we hosted on the policy and technology issues related to online trust (streaming video available).


Google Should Stand up for Fair Use in Books Fight

On Tuesday Judge Denny Chin rejected a proposed settlement in the Google Book Search case. My write-up for Ars Technica is here.

The question everyone is asking is what comes next. The conventional wisdom seems to be that the parties will go back to the bargaining table and hammer out a third iteration of the settlement. It’s also possible that the parties will try to appeal the rejection of the current settlement. Still, in case anyone at Google is reading this, I’d like to make a pitch for the third option: litigate!

Google has long been at the forefront of efforts to shape copyright law in ways that encourage innovation. When the authors and publishers first sued Google back in 2005, I was quick to defend the scanning of books under copyright’s fair use doctrine. And I still think that position is correct.

Unfortunately, in 2008 Google saw an opportunity to make a separate truce with the publishing industry that placed Google at the center of the book business and left everyone else out in the cold. Because of the peculiarities of class action law, the settlement would have given Google the legal right to use hundreds of thousands of “orphan” works without actually getting permission from their copyright holders. Competitors who wanted the same deal would have had no realistic way of getting it. Googlers are a smart bunch, and so they took what was obviously a good deal for them even though it was bad for fair use and online innovation.

Now the deal is no longer on the table, and it’s not clear if it can be salvaged. Judge Chin suggested that he might approve a new, “opt-in” settlement. But switching to an opt-in rule would undermine the very thing that made the deal so appealing to Google in the first place: the freedom to incorporate works whose copyright status was unclear. Take that away, and it’s not clear that Google Book Search can exist at all.

Moreover, I think the failure of the settlement may strengthen Google’s fair use argument. Fair use exists as a kind of safety valve for the copyright system, to ensure that it does not damage free speech, innovation, and other values. Although formally speaking judges are supposed to run through the famous four factor test to determine what counts as a fair use, in practice an important factor is whether the judge perceives the defendant as having acted in good faith. Google has now spent three years looking for a way to build its Book Search project using something other than fair use, and come up empty. This underscores the stakes of the fair use fight: if Judge Chin ruled against Google’s fair use argument, it would mean that it was effectively impossible to build a book search engine as comprehensive as the one Google has built. That outcome doesn’t seem consistent with the constitution’s command that copyright promote the progress of science and the useful arts.

In any event, Google may not have much choice. If it signs an “opt-in” settlement with the Author’s Guild and the Association of American Publishers, it’s likely to face a fresh round of lawsuits from other copyright holders who aren’t members of those organizations — and they might not be as willing to settle for a token sum. So if Google thinks its fair use argument is a winner, it might as well test it now before it’s paid out any settlement money. And if it’s not, then this business might be too expensive for Google to be in at all.