April 20, 2014

Dan Wallach

Dan Wallach is an associate professor in the Department of Computer Science at Rice University in Houston, Texas, and is the associate director of the NSF's ACCURATE (A Center for Correct, Usable, Reliable, Auditable and Transparent Elections). His research involves computer security and the issues of building secure and robust software systems for the Internet. He has testified about voting security issues before government bodies in the U.S., Mexico, and the European Union, has served as an expert witness in a number of voting technology lawsuits, and recently participated in California's "top-to-bottom" audit of its voting systems.


Your TV is spying on you, and what you can do about it

Recently, an observer in the UK ran a packet sniffer and noticed that his LG “smart” TV was sending all of his viewing habits back to an LG server, including filenames from an external USB disk. Add this atop observations that Samsung’s 2012-era “smart” TVs were riddled with security holes. (No word yet on the 2013 edition.)

What’s going on here? Mostly it’s just incompetence. Somebody thought it was a good idea to build these TVs with all these features, and nobody ever said “maybe we need some security people on the design team to make sure we don’t have a problem”, much less “maybe all this data flowing from the TV to us constitutes a massive violation of our customers’ privacy that will land us in legal hot water.” The deep issue here is that it’s relatively easy to build something that works, but it’s significantly harder to build something that’s secure and respects privacy.
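
As a rough illustration of the kind of observation that kicked this off, here is a minimal sketch of watching a TV's outbound HTTP traffic with a packet sniffer. It assumes Python with the scapy library, a hypothetical TV address of 192.168.1.50, and a vantage point that can actually see the TV's traffic (a mirrored switch port, or running it on the router); none of these details come from the original report.

    # Sketch: watch outbound HTTP traffic from a "smart" TV on the local network.
    # Assumptions: scapy is installed, the TV's LAN address is 192.168.1.50, and
    # this machine can see the TV's packets (mirrored port or router).
    from scapy.all import sniff, IP, TCP, Raw

    TV_ADDR = "192.168.1.50"  # hypothetical address of the TV

    def show_outbound(pkt):
        # Only look at TCP packets leaving the TV that carry a payload.
        if pkt.haslayer(Raw) and pkt.haslayer(IP) and pkt[IP].src == TV_ADDR:
            payload = bytes(pkt[Raw].load)
            if payload.startswith((b"GET ", b"POST ")):
                print(f"{pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")
                print(payload[:200].decode("latin-1", errors="replace"))

    # Capture only traffic to or from the TV; sniffing requires root privileges.
    sniff(filter=f"host {TV_ADDR} and tcp", prn=show_outbound, store=False)

Anything that shows up here as cleartext HTTP, viewing history and USB filenames included, is visible to everyone else on the path to the server as well.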


Engineering an insider-attack-resistant email system and why you wouldn’t want to use it

Earlier this week, Felten made the observation that the government eavesdropping on Lavabit could be considered an insider attack against Lavabit users. This leads to the obvious question: how might we design an email system that’s resistant to such an attack? The sad answer is that we’ve had this technology for decades, but it never took off. Phil Zimmermann put out PGP in 1991. S/MIME-based PKI email encryption was widely supported by the late 1990s. So why didn’t it become ubiquitous?
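
For readers who haven't seen it in action, here is a minimal sketch of the PGP-style encryption in question, using the python-gnupg wrapper around GnuPG. The recipient address is hypothetical, and the recipient's public key is assumed to already be in the local keyring; the point is simply that the mail provider never holds the decryption key.

    # Sketch: OpenPGP-style end-to-end email encryption, the technology PGP and
    # S/MIME have offered for decades. Assumes GnuPG plus the python-gnupg
    # wrapper, and a recipient key already imported into the local keyring.
    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    recipient = "alice@example.com"          # hypothetical recipient
    message = "Meet me at the usual place."  # plaintext body

    # Encrypt to the recipient's public key; only the holder of the matching
    # private key (not the mail provider) can read the result.
    encrypted = gpg.encrypt(message, [recipient])
    if encrypted.ok:
        print(str(encrypted))  # ASCII-armored ciphertext, safe to send as email
    else:
        print("encryption failed:", encrypted.status)

Everything around this snippet (generating keys, exchanging them, keeping them safe) is where the usability trouble famously lies.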


Lavabit and how law enforcement access might be done in the future

The saga of Lavabit, the now-closed “secure” mail provider, is an interesting object of study. They’re in the process of appealing a court order to produce their SSL private keys, which would give a government eavesdropper access to the entirety of the traffic going in and out of Lavabit. You can read Lavabit’s appeal brief and a general summary of their legal situation. What jumps out is that Lavabit tried to propose an alternative: giving access exclusively to metadata from the target of the investigation. Lavabit’s proposal:

  • The government would pay $3500 for Lavabit’s development costs and operations
  • Lavabit would provide a variety of email headers on the subject of the investigation, notably excluding the subject line (a rough sketch of this kind of header filtering appears after this list)
  • This surveillance data would be sent in daily batches to the government
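
As a concrete sketch of what that second item might look like in code, here is one way to filter headers with Python's standard email library. The target address, on-disk layout, and batch format are illustrative assumptions, not a description of Lavabit's actual system.

    # Sketch: collect message headers for a targeted account while omitting the
    # Subject line, roughly the metadata-only reporting Lavabit proposed.
    # The target address and file layout are assumptions for illustration.
    from email import message_from_bytes
    from email.policy import default
    from pathlib import Path

    TARGET = "suspect@example.com"   # hypothetical target of the investigation
    EXCLUDED = {"subject"}           # the proposal explicitly withheld this header

    def headers_for_report(raw_message: bytes) -> dict:
        msg = message_from_bytes(raw_message, policy=default)
        return {name: str(value) for name, value in msg.items()
                if name.lower() not in EXCLUDED}

    # Daily batch: walk the day's stored messages and keep the filtered headers
    # of anything sent to or from the target.
    batch = []
    for path in Path("/srv/mail/today").glob("*.eml"):  # hypothetical path
        report = headers_for_report(path.read_bytes())
        if TARGET in report.get("To", "") or TARGET in report.get("From", ""):
            batch.append(report)

    print(f"{len(batch)} header records in today's batch")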

It appears that the government wasn’t interested in negotiating this, instead going for the whole enchilada, which then led Lavabit to pull the plug on its service. The question I want to pursue is how this whole situation could have played out in a way that satisfied the government’s investigative needs without flagrantly violating the Fourth Amendment’s prohibition against unreasonable search and seizure. Consider whether Lavabit might have adopted Google’s legal procedures. Google clearly spells out what it’s willing to divulge with a subpoena, court order, or warrant (and nicely defines each of those terms). In Google’s process, the government brings a written search warrant, Google’s legal team reviews it, and then Google provides access to the targeted account, notifying the affected user when it’s allowed to. Seems reasonable, right?

If all the government needed was real-time traces of specific subjects, that would seem to be a reasonable point of negotiation between it and Lavabit. For the right price, Lavabit could certainly have engineered a solution to the government’s needs. It appears that there wasn’t any serious attempt at negotiation; the government wanted much more than this, creating the dispute. (The Guardian claims that the government also wasn’t willing to pay the $3500, calling it unreasonable. That claim is hard to stomach, given all the other expenses involved in a major criminal investigation.)

Lavabit used SSL to protect data in transit, and cryptography keyed from each user’s password to protect the data stored on its hard drives. But when a user logs in, the key material needed to present that data is, by necessity, available on Lavabit’s servers. While users might be able to use stronger cryptographic means to protect their data against legal warrants (e.g., using Thunderbird with the Enigmail OpenPGP plugin), ultimately the lesson of Lavabit is that technology alone cannot solve a legal problem. A future Lavabit needs to have its legal processes sorted out in advance, making reasonable promises to its users and making reasonable access available to the government. Likewise, it’s time for Congress to establish some clear limits on government surveillance to prevent unreasonable search and collection practices in the future.
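
To make the point about key material concrete, here is a minimal sketch of password-derived storage encryption of the general kind Lavabit used, built on the Python cryptography package. The KDF parameters, salt handling, and message format are illustrative assumptions, not Lavabit's actual design.

    # Sketch: encrypt stored mail under a key derived from the user's password.
    # Parameters and format are illustrative, not Lavabit's actual scheme.
    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(password: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)                       # stored alongside the ciphertext
    key = derive_key("correct horse battery", salt)
    ciphertext = Fernet(key).encrypt(b"stored email body")

    # The catch: to show the mail back to the user, the server re-derives the
    # same key at login. While a session is active, the key (and the plaintext)
    # exist on the server, and that is exactly what a warrant can reach.
    plaintext = Fernet(derive_key("correct horse battery", salt)).decrypt(ciphertext)

This is why tools like Enigmail change the legal picture: the private key lives on the user's own machine, so the provider has no decryption key to hand over.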


On the NSA’s capabilities

Last Thursday brought significant new revelations about the capabilities of the National Security Agency. While the articles in the New York Times, ProPublica, and The Guardian skirted around technical specifics, several broad themes came out.

  • NSA has the capacity to read significant amounts of encrypted Internet traffic.
  • NSA has some amount of cooperation from vendors to weaken cryptographic aspects of their products.
  • NSA has a stable of exploits designed to break into specifically targeted computers (“tailored access”, in NSA parlance).
  • NSA shares this technology with its counterparts at some of our close allies, apparently the “five eyes” group (USA, Canada, Great Britain, Australia, and New Zealand).

The NSA appears to be taking a holistic approach toward its interception technologies.


Let’s stop Nigerian scams once and for all

A personal friend of mine recently had their Yahoo account hacked by a Nigerian scammer. I know this because the email I got (“I’m stuck in the Philippines and need you to wire money”) had an IP address in a “Received” header that pointed squarely at Lagos, Nigeria. The modus operandi of these scammers is well understood. They erased my friend’s address book to make it harder to alert friends and family. The email they sent out also had a “Reply-To” field that directed subsequent conversations to a Hotmail account with the same username. I bantered back and forth with the scammer, but wasn’t able to accomplish much of interest before Hotmail’s abuse staff, whom I had notified concurrently, shut down the account. Now my friend has to clean up the mess left behind by the scammer.
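
For the curious, here is a minimal sketch of the kind of header inspection described above, using Python's standard email library; the filename and the message contents are hypothetical.

    # Sketch: inspect a suspicious message's headers for the telltale signs
    # described above. The filename is hypothetical.
    import re
    from email import message_from_bytes
    from email.policy import default
    from pathlib import Path

    msg = message_from_bytes(Path("suspicious.eml").read_bytes(), policy=default)

    # Each relay prepends its own Received header, so the last one in the list
    # is the hop closest to the original sender.
    received = msg.get_all("Received", [])
    if received:
        earliest = received[-1]
        ips = re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", earliest)
        print("Originating IP (feed it to a geolocation lookup):", ips)

    # A Reply-To that differs from From is a classic hijacked-account sign:
    # replies get siphoned off to an address the scammer controls.
    sender = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    if reply_to and reply_to != sender:
        print("Warning: replies go to", reply_to, "rather than", sender)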


Uncertified voting equipment

(Or, why doing the obvious thing to improve voter throughput in Harris County early voting would exacerbate a serious security vulnerability.)

I voted today, using one of the many early voting centers in my county. I waited roughly 35 minutes before reaching a voting machine. Roughly a third of the 40 voting machines at the location were unused while I was watching, so I started thinking about how they could improve their efficiency. If you look at the attached diagram, you’ll see how the polling place is laid out. There are four strings of ten Hart InterCivic eSlate DRE (paperless electronic) voting machines, each string connected to a single controller device (JBC) over a local network. These are coded green in the diagram. On the table next to them is a laptop computer, used for verifying voter registration. It has a printer which produces a 2D barcode on a sticker. The sticker is placed on a big sheet of paper, the voter signs it, and then a poll worker uses the 2D barcode scanner attached to the JBC to read the barcode. This is how the JBC knows what ballot style to issue to the voter, avoiding input error on the part of the poll workers. I’ve used the color blue to indicate all of the early voting machinery, which is not normally present on Election Day (when all of the voters arriving in a given precinct get the same ballot style).

Poll Diagram


IEEE blows it on the Security & Privacy copyright agreement

Last June, I wrote about the decision at the business meeting of IEEE Security & Privacy to adopt the USENIX copyright policy, wherein authors grant a right for the conference to publish the paper and warrant that they actually wrote it, but otherwise the work in question is unquestionably the property of the authors. As I recall, there were only two dissenting votes in a room that was otherwise unanimously in favor of the motion.

Fast forward to the present. The IEEE Security & Privacy program committee, on which I served, has notified the authors of which papers have been accepted or rejected. Final camera-ready copies will be due soon, but we’ve got a twist. They’ve published the new license that authors will be expected to sign. Go read it.

The IEEE’s new “experimental delayed-open-access” licensing agreement for IEEE Security & Privacy goes very much against the vote last year of the S&P business meeting, bearing only a superficial resemblance to the USENIX policy we voted to adopt. While both policies give a period of exclusive distribution rights to the conference (12 months for USENIX, 18 months for IEEE), the devil is in the details.



Tinkering with the IEEE and ACM copyright policies

Historically, papers published in an IEEE or ACM conference or journal have had to have their copyrights assigned to the IEEE or ACM, respectively. Most of us were happy with this sort of arrangement, but the new IEEE policy seems to place additional restrictions on this process. Matt Blaze blogged about this issue in particular detail.

The IEEE policy and the comparable ACM policy appear to be focused on creating revenue opportunities for these professional societies. Hypothetically, that income should result in cost savings elsewhere (e.g., lower conference registration fees) or in higher-quality member services (e.g., paying the expenses of conference program committee members to attend meetings). In practice, neither of these is true. Regardless, our professional societies work hard to keep a paywall between our papers and their readership. Is this sort of behavior in our best interests? Not really.

What benefits the author of an academic paper? In a word, impact. Papers that are more widely read are more widely influential. Furthermore, widely read papers are more widely cited; citation counts are explicitly considered in hiring, promotion, and tenure cases. Anything that gets in the way of a paper’s impact is something that damages our careers and it’s something we need to fix.

There are three common solutions. First, we ignore the rules and post copies of our work on our personal, laboratory, and/or departmental web pages. Virtually any paper written in the past ten years can be found online, without cost, and conveniently cataloged by sites like Google Scholar. Second, some authors I’ve spoken to will significantly edit the copyright assignment forms before submitting them. Nobody apparently ever notices this. Third, some professional societies, notably the USENIX Association, have changed their rules. The USENIX policy completely inverts the relationship between author and publisher. Authors grant USENIX certain limited and reasonable rights, while the authors retain copyright over their work. USENIX then posts all the papers on its web site, free of charge; authors are free to do the same on their own web sites.

(USENIX ensures that every conference proceedings has a proper ISBN. Every USENIX paper is just as “published” as a paper in any other conference, even though printed proceedings are long gone.)

Somehow, the sky hasn’t fallen. So far as I know, the USENIX Association’s finances still work just fine. Perhaps it’s marginally more expensive to attend a USENIX conference, but then the service level is also much higher. The USENIX professional staff do things that are normally handled by volunteer labor at other conferences.

This brings me to the vote we had last week at the IEEE Symposium on Security and Privacy (the “Oakland” conference) during the business meeting. We had unusually high attendance (perhaps 150 out of 400 attendees), as there were a variety of important topics under discussion. We spent maybe 15 minutes talking about the IEEE’s copyright policy, and the resolution before the room was: should we reject the IEEE copyright policy and adopt the USENIX policy? Ultimately, there were two “no” votes and everybody else voted “yes.” That’s an overwhelming statement.

The question is what happens next. I’m planning to attend ACM CCS this October in Chicago and I expect we can have a similar vote there. I hope similar votes can happen at other IEEE and ACM conferences. Get it on the agenda of your business meetings. Vote early and vote often! I certainly hope the IEEE and ACM agree to follow the will of their membership. If the leadership don’t follow the membership, then we’ve got some more interesting problems that we’ll need to solve.

Sidebar: ACM and IEEE make money by reselling our work, particularly with institutional subscriptions to university libraries and large companies. As an ACM or IEEE member, you also get access to some, but not all, of the online library contents. If you make everything free (as in free beer), removing that revenue source, then you’ve got a budget hole to fill. While I’m no budget wizard, it would make sense for our conference registration fees to support the archival online storage of our papers. Add in some online advertising (example: startup companies, hungry to hire engineers with specialized talents, would pay serious fees for advertisements adjacent to research papers in the relevant areas), and I’ll bet everything would work out just fine.


Federating the "big four" computer security conferences

Last year, I wrote a report about rebooting the CS publication process (Tinker post, full tech report; an abbreviated version has been accepted to appear as a Communications of the ACM viewpoint article). I talked about how we might handle four different classes of research papers (“top papers” which get in without incident, “bubble papers” which could well have been published if only there was capacity, “second tier” papers which are only of interest to limited communities, and “noncompetitive” papers that have no chance) and I suggested that we need to redesign how we handle our publication process, primarily by adopting something akin to arXiv.org on a massive scale. My essay goes into detail on the benefits and challenges of making this happen.

Of all the related ideas out there, the one I find most attractive is what the database community has done with Proceedings of the VLDB Endowment (see also, their FAQ). In short, if you want to publish a paper in VLDB, one of the top conferences in databases, you must submit your manuscript to the PVLDB. Submissions then go through a journal-like two-round reviewing process. You can submit a paper at any time and you’re promised a response within two months. Accepted papers are published immediately online and are also presented at the next VLDB conference.

I would love to extend the PVLDB idea to the field of computer security scholarship, but this is troublesome when our “big four” security conferences — ISOC NDSS, IEEE Security & Privacy (the “Oakland” conference), USENIX Security, and ACM CCS — are governed by four separate professional societies. Back in the old days (ten years ago?), NDSS and USENIX Security were the places you sent “systems” security work, while Oakland and CCS were where you sent “theoretical” security work. Today, that dichotomy doesn’t really exist any more. You pretty much just send your paper to the conference with the next deadline. Pretty much the same community of people serves on each program committee, and the same sorts of papers appear at every one of these conferences. (Although USENIX Security and NDSS may well still have a preference for “systems” work, the “theory” bias at Oakland and CCS is gone.)

My new idea: Imagine that we set up the “Federated Proceedings of Computer Security” (representing a federation of the four professional societies in question). It’s a virtual conference, publishing exclusively online, so it has no effective limits on the number of papers it might publish. Manuscripts could be submitted to FPCS with rolling deadlines (let’s say one every three months, just as we have now) and conference-like program committees would be assembled for each deadline. (PVLDB has continuous submissions and publications. We could do that just as well.) With each committee operating like a conference PC, top papers would be accepted rapidly and be “published” with the speed of a normal conference PC process. The “bubble” papers that would otherwise have been rejected by our traditional conference process would now have a chance to be edited and go through a second round of review with the same reviewers. Noncompetitive papers would continue to be rejected, as always.

How would we connect FPCS back to the big four security conferences? Simple: once a paper is accepted for FPCS publication, it would appear at the next of the “big four” conferences. Initially, FPCS would operate concurrently with the regular conference submission process, but it could quickly replace it as well, just as PVLDB quickly became the exclusive mechanism for submitting a paper to VLDB.

One more idea: there’s no reason that FPCS submissions need to be guaranteed a slot in one of the big four security conferences. It’s entirely reasonable that we could increase the acceptance rate at FPCS, and have a second round of winnowing for which papers are presented at our conferences. This could either be designed as a “pull” process, where separate conference program committees pick and choose from the FPCS accepted papers, or it could be designed as a “push” process, where conferences give a number of slots to FPCS, which then decides which papers to “award” with a conference presentation. Either way, any paper that’s not immediately given a conference slot is still published, and any such paper that turns out to be a big hit can always be awarded with a conference presentation, even years after the fact.

This sort of two-tier structure has some nice benefits. Good-but-not-stellar papers get properly published, better papers get recognized as such, and the whole process operates with lower latency than our current system. Furthermore, we get far fewer papers going around the submit/reject/revise/resubmit treadmill, thus lowering the workload on successive program committees. It’s full of win.

Of course, there are many complications that would get in the way of making this happen:

  • We need a critical mass to get this off the ground. We could initially roll it out with a subset of the big four, and/or with more widely spaced deadlines, but it would be great if the whole big four bought into the idea all at once.
  • We would need to harmonize things like page length and other formatting requirements, as well as have a unified policy on single vs. double-blind submissions.
  • We would need a suitable copyright policy, perhaps adopting something like the USENIX model where authors retain their copyright while granting FPCS (and its constituent conferences) the right to republish their work. ACM and IEEE would require arm-twisting to go along with this.
  • We would need a governance structure for FPCS. That would include a steering committee for selecting the editor/program chairs, but who watches the watchers?
  • What do we do with our journals? FPCS changes our conference process around, but doesn’t touch our journals at all. Of course, the journals could also reinvent themselves, but that’s a separate topic.

In summary, my proposed Federated Proceedings of Computer Security adapts many of the good ideas developed by the database community with their PVLDB. We could adopt it incrementally for only one of the big four conferences or we could go whole hog and try to change all four at once.

Thoughts?


Building a better CA infrastructure

As several Tor project authors, Ben Adida, and many others have written, our certificate authority infrastructure has the flaw that any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now that’s just untenable. So what solutions are available?

First, some non-solutions: extended validation certs do nothing useful. Will users ever be trained to look for the extra cues in browser behavior and balk when those cues are absent because a site presents a normal cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
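
As a rough sketch of the track-and-compare idea behind Certificate Patrol (the host name and the local JSON store are illustrative choices, not part of that extension), the following records a site's certificate fingerprint and warns if it changes on a later visit.

    # Sketch: remember the certificate seen for a site and warn if it changes,
    # the basic idea behind Certificate Patrol. A real implementation would
    # also need to cope gracefully with legitimate certificate rotation.
    import hashlib
    import json
    import socket
    import ssl
    from pathlib import Path

    STORE = Path("seen_certs.json")   # illustrative local fingerprint store
    HOST, PORT = "example.com", 443

    def cert_fingerprint(host: str, port: int) -> str:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    seen = json.loads(STORE.read_text()) if STORE.exists() else {}
    fingerprint = cert_fingerprint(HOST, PORT)

    if seen.get(HOST) not in (None, fingerprint):
        print(f"WARNING: certificate for {HOST} changed; possible interception.")
    else:
        seen[HOST] = fingerprint
        STORE.write_text(json.dumps(seen, indent=2))
        print(f"{HOST}: fingerprint recorded ({fingerprint[:16]}...)")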

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.

UPDATE: Google is now doing almost exactly what I suggested.

There are a variety of other proposals out there, notably ones that leverage DNSSEC to augment or supplant today’s SSL/TLS certificate authorities. Since DNSSEC gives you more control over your own DNS records, it can also give you more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attackers’ firewalls to filter without breaking everything, so I certainly hope this takes off.
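
The DNSSEC approach has since been standardized as DANE, using TLSA records. As a small illustration (assuming the dnspython package; the domain is just an example, and a real client must also validate the DNSSEC signatures on the answer, which this sketch omits):

    # Sketch: look up a DANE TLSA record, which uses DNSSEC-protected DNS to
    # constrain which certificates are valid for a given service.
    # Assumes dnspython; the domain is an example, and DNSSEC validation of
    # the answer itself is omitted here.
    import dns.resolver

    domain = "example.com"
    qname = f"_443._tcp.{domain}"  # TLSA records live under _port._proto.domain

    try:
        answer = dns.resolver.resolve(qname, "TLSA")
        for record in answer:
            # usage/selector/mtype describe how to match the certificate data.
            print(f"usage={record.usage} selector={record.selector} "
                  f"mtype={record.mtype} data={record.cert.hex()[:32]}...")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"No TLSA record published for {qname}")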

Let’s say that future browsers properly use all of these tricks and can determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists on proxying all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor's note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 -- as well as the "Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates" event that CITP hosted in DC last year with policymakers, industry, and activists.]