After the Brexit vote, politicians, businesses and citizens are all wondering what’s next. In general, legal uncertainty permeates Brexit, but in the world of bits and bytes, Brussels and London have in fact been on a collision course at least since the 90s. The new British prime minister, Theresa May, has been personally responsible for a deepening divide across the North Sea on data and communication policy. Although EU citizens will see stronger privacy and cybersecurity protections through EU law post-Brexit, multinational companies should be particularly worried about how future regulation will treat the vast amounts of data they move about customers, employees, and deals between the EU and the UK. [Read more…]
There have been rumors for years that the NSA can decrypt a significant fraction of encrypted Internet traffic. In 2012, James Bamford published an article quoting anonymous former NSA officials stating that the agency had achieved a “computing breakthrough” that gave them “the ability to crack current public encryption.” The Snowden documents also hint at some extraordinary capabilities: they show that the NSA has built extensive infrastructure to intercept and decrypt VPN traffic and suggest that the agency can decrypt at least some HTTPS and SSH connections on demand.
However, the documents do not explain how these breakthroughs work, and speculation about possible backdoors or broken algorithms has been rampant in the technical community. Yesterday at ACM CCS, one of the leading security research venues, we and twelve coauthors presented a paper that we think solves this technical mystery.
Today, the vulnerable state of electronic communications security dominates headlines across the globe, while surveillance, money and power increasingly permeate the ‘cybersecurity’ policy arena. With the stakes so high, how should communications security be regulated? Deirdre Mulligan (UC Berkeley), Ashkan Soltani (independent, Washington Post), Ian Brown (Oxford) and Michel van Eeten (TU Delft) weighed in on this question at an expert panel on my doctoral project at the Amsterdam Information Influx conference. [Read more…]
[Let’s welcome new CITP blogger Pete Zimmerman, a first-year graduate student in the computer security group at Princeton. — Arvind Narayanan]
Following the revelations of wide-scale surveillance by US intelligence agencies and their allies, a myriad of services offering end-to-end encrypted communications have cropped up to take advantage of the increasing demand for privacy from surveillance. When coupled with anonymity, end-to-end encryption can prevent a central service provider from obtaining any information about its users or their communications. However, maintaining anonymity is difficult while simultaneously offering a straightforward way for users to find each other.
Enter Wickr. This startup offers a simple app featuring “military grade encryption” of text, photo, video, and voice messages as well as anonymous registration for its users. Wickr claims that it cannot identify who has registered with the service or which of its users are communicating with each other. During registration, users enter their email address and/or phone number (non-Wickr IDs). The app uses a cryptographic hash function (SHA-256 in this case) to derive “anonymous” Wickr IDs from the non-Wickr IDs. Wickr IDs are then stored server-side and used for discovery. When your friends want to find you, they enter your phone number or email address, which is put through the same hash function, producing the same output (your Wickr ID). Wickr looks this up in its database to determine whether you’ve registered with the service, and if so, facilitates message exchange. This process simplifies the discovery of other users, supposedly without Wickr having the ability to identify the users of the anonymous service.
The problem here is that while it’s not always possible to determine the input to a hash function given the output, we can leverage the fact that the same input always yields the same output. If the number of possible inputs is small, we can simply try all of them. Unfortunately, this is a recurring theme in a variety of applications as a result of misunderstanding cryptography — specifically, the fact that hash functions are not one-way if the input space is small. A great explanation on the use of cryptographic hash functions in attempts to anonymize data can be found here.
Yesterday we saw two stories that illustrate the limits of cryptography as a shield against government. In San Francisco, police arrested a man alleged to be Dread Pirate Roberts (DPR), the operator of online drug market Silk Road. And in Alexandria, Virginia, a court unsealed documents revealing the tussle between the government and secure email provider Lavabit.
On Monday, Ed wrote about Software Transparency, the idea that software is more resistant to intentional backdoors (and unintentional security vulnerabilities) if the process used to create it is transparent. Elements of software transparency include the availability of source code and the ability to read or contribute to a project’s issue tracker or internal developer discussion. He mentioned a case that I want to discuss in detail: in 2008, the Debian Project (a popular Linux distribution used for many web servers) announced that the pseudorandom number generator in Debian’s version of OpenSSL was broken and insecure.