May 27, 2015


Why Your Netflix Traffic is Slow, and Why the Open Internet Order Won’t (Necessarily) Make It Faster

The FCC recently released the Open Internet Order, which has much to say about “net neutrality”: whether (and in what circumstances) an Internet service provider is permitted to prioritize traffic. I’ll leave more detailed thoughts on the order itself to future posts; in this post, I would like to clear up a fairly widespread misconception about the sources of Internet congestion, and explain why “net neutrality” has very little to do with the performance problems between Netflix and consumer ISPs such as Comcast.

Much of the popular media has led consumers to believe that certain Internet traffic—specifically, Netflix video streams—was performing poorly because Internet service providers were explicitly slowing it down. John Oliver accuses Comcast of intentionally slowing down Netflix traffic (an Oatmeal cartoon reiterates this claim). These caricatures are false, and they demonstrate a fundamental misunderstanding of how Internet connectivity works, what led to the congestion in the first place, and the economics of how the problems were ultimately resolved.
[Read more…]


Security flaw in New South Wales puts thousands of online votes at risk

Update April 26: The technical paper is now available

Update Mar. 23 1:30 PM AEDT: Our response to the NSWEC’s response

New South Wales, Australia, is holding state elections this month, and they’re offering a new Internet voting system developed by e-voting vendor Scytl and the NSW Electoral Commission. The iVote system, which its creators describe as private, secure and verifiable, is predicted to see record turnout for online voting. Voting has been happening for six days, and already iVote has received more than 66,000 votes. Up to a quarter million voters (about 5% of the total) are expected to use the system by the time voting closes next Saturday.

Since we’ve both done extensive research on the design and analysis of Internet voting systems, we decided to perform an independent security review of iVote. We’ll prepare a more extensive technical report after the election, but we’re writing today to share news about critical vulnerabilities we found that have put tens of thousands of votes at risk. We discovered a major security hole allowing a man-in-the-middle attacker to read and manipulate votes. We also believe there are ways to circumvent the verification mechanism.

[Read more…]


What should we do about re-identification? A precautionary approach to big data privacy

Computer science research on re-identification has repeatedly demonstrated that sensitive information can be inferred even from de-identified data in a wide variety of domains. This has posed a vexing problem for practitioners and policy makers. If the absence of “personally identifying information” cannot be relied on for privacy protection, what are the alternatives? Joanna Huey, Ed Felten, and I tackle this question in a new paper “A Precautionary Approach to Big Data Privacy”. Joanna presented the paper at the Computers, Privacy & Data Protection conference earlier this year.

[Read more…]


On compromising app developers to go after their users

In a recent article by Scahill and Begley, we learned that the CIA is interested in targeting Apple products. I largely agree with the quote from Steve Bellovin, that “spies gonna spy”, so of course they’re interested in targeting the platform that rides in the pockets of many of their intelligence collection targets. What could be a tastier platform for intelligence collection than a device with a microphone, cellular network connection, GPS, and a battery, which your targets willingly carry around in their pockets? Even better, your targets will recharge your spying device for you. Of course you target their iPhones! (And Androids. And Blackberries.)

To my mind, the real eyebrow-raising moment was that the CIA is also allegedly targeting app developers by “whacking” Apple’s Xcode tool, presumably allowing all subsequent software the developer ships to the App Store to contain some sort of malicious implant, which would then be distributed within that developer’s app. Nothing has been disclosed about how widespread these attacks are (if they were ever used at all), which developers might have been targeted, or how the implants might function.
[Read more…]


Threshold signatures for Bitcoin wallets are finally here

Today we are pleased to release our paper presenting a new ECDSA threshold signature scheme that is particularly well-suited for securing Bitcoin wallets. We teamed up with cryptographer Rosario Gennaro to build this scheme. Threshold signatures can be thought of as “stealth multi-signatures.”

[Read more…]


FREAK Attack: The Chickens of ‘90s Crypto Restriction Come Home to Roost

Today researchers disclosed a new security flaw in TLS/SSL, the protocol used to secure web connections. The flaw is significant in itself, but it is also a good example of what can go wrong when governments ask for weaknesses to be built into security systems.

Back in the early 1990s, it was illegal to export most products from the U.S. if they had strong cryptography. To be exportable, a system had to use small keys that could be defeated by a brute-force search over the (reduced) key space. Because of this, the secure web protocol, SSL, was designed to allow either party to a communication to ask to use a special export mode. [Note for crypto geeks: “export mode” refers to certain cipher suites whose names start with “EXP”.] When it became legal to export strong crypto, the export mode feature was not removed from the protocol because some software still depended on it. Export mode is still an option today.

This creates the possibility that a network “man in the middle” (MITM) can downgrade the security of a connection. If Alice and Bob are setting up a connection, the MITM can tell Alice that Bob is asking for export mode, and vice versa. This kind of “downgrade attack” is well known, and the TLS/SSL protocol has features designed to detect it. In this case, for complicated reasons beyond the scope of this post, the anti-downgrade protections could be evaded by a clever MITM.

Having tricked Alice and Bob into using export mode, an adversary could then crack the 512-bit RSA keys used in this mode. Back in the ‘90s that would have required a heavy-duty computation, but today it takes about 7 hours on Amazon EC2 and costs about $100.

Many web sites are vulnerable to this attack, allowing an adversary in the network to spoof or spy on traffic to vulnerable sites. About 12% of popular sites appear to be vulnerable, including americanexpress.com, groupon.com, bloomberg.com, kohls.com, marriott.com, and usajobs.gov.
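For readers who want to check a particular server, one rough test is to offer only export-grade RSA cipher suites in the handshake and see whether the server accepts. Here is a minimal Python sketch of that idea (an illustration, not the researchers’ test tool); it assumes a local OpenSSL build old enough to still include the EXP suites, and the hostname at the bottom is just a placeholder.

```python
# Sketch: probe a server for RSA_EXPORT cipher suite support.
# Assumes a Python/OpenSSL build old enough to still include
# export-grade ciphers; modern OpenSSL removes them entirely,
# in which case set_ciphers("EXPORT") simply raises an error.
import socket
import ssl

def offers_export_rsa(host, port=443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        # Restrict our ClientHello to export-grade suites only.
        ctx.set_ciphers("EXPORT")
    except ssl.SSLError:
        return None  # local library no longer supports export ciphers
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # Handshake succeeded: the server accepted an export suite.
                return tls.cipher()
    except (ssl.SSLError, OSError):
        return None  # handshake refused: no export suites offered

if __name__ == "__main__":
    print(offers_export_rsa("example.com"))  # placeholder hostname
```

If the handshake succeeds, the server is willing to negotiate export-grade crypto and is a candidate for the FREAK downgrade; a None result means either the server refused or your local TLS library has already dropped export ciphers.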

Even the National Security Agency’s own site is vulnerable. That’s not a big national security problem in itself because NSA doesn’t distribute state secrets from its public site. But there is an important lesson here about the consequences of crypto policy decisions: the NSA’s actions in the ‘90s to weaken exportable cryptography boomeranged on the agency, undermining the security of its own site twenty years later.

Next time you hear a government official ask for a security system to be modified to preserve their own access to data, ask yourself: What are the side effects? How do we know we won’t regret this later?


A clear line between offense and defense

The New York Times, in an editorial today entitled “Arms Control for a Cyberage,” writes:

The problem is that unlike conventional weapons, with cyberweapons “there’s no clear line between offense and defense,” as President Obama noted this month in an interview with Re/code, a technology news publication. Defense in cyberwarfare consists of pre-emptively locating the enemy’s weakness, which means getting into its networks.

This is simply wrong.
[Read more…]


We can de-anonymize programmers from coding style. What are the implications?

In a recent post, I talked about our paper showing how to identify anonymous programmers from their coding styles. We used a combination of lexical features (e.g., variable name choices), layout features (e.g., spacing), and syntactic features (i.e., grammatical structure of source code) to represent programmers’ coding styles. The previous post focused on the overall results and techniques we used. Today I’ll talk about applications and explain how source code authorship attribution can be used in software forensics, plagiarism detection, copyright or copyleft investigations, and other domains.
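To give a concrete flavor of these features, here is a small Python sketch (an illustration only, not the feature extractor from our paper) that computes a few lexical and layout features of the sort described above; the syntactic features, which come from the parse tree of the code, are omitted.

```python
# Minimal sketch of lexical and layout coding-style features.
# Illustrative only: the actual feature set in the paper is much
# larger and also includes syntactic (parse-tree) features.
import re
from collections import Counter

def style_features(source: str) -> dict:
    lines = source.splitlines()
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    names = Counter(tokens)
    return {
        # Lexical: how long are the identifiers this author prefers?
        "avg_identifier_length": sum(map(len, names)) / max(len(names), 1),
        # Lexical: vocabulary richness (distinct identifiers per token).
        "identifier_diversity": len(names) / max(len(tokens), 1),
        # Layout: tabs vs. spaces, line length, blank-line frequency.
        "tab_indent_ratio": sum(l.startswith("\t") for l in lines) / max(len(lines), 1),
        "avg_line_length": sum(map(len, lines)) / max(len(lines), 1),
        "blank_line_ratio": sum(not l.strip() for l in lines) / max(len(lines), 1),
    }

if __name__ == "__main__":
    # Example: fingerprint this very file.
    print(style_features(open(__file__).read()))
```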

[Read more…]


Lenovo Pays For Careless Product Decisions

The discovery last week that Lenovo laptops had been shipping with preinstalled adware that left users wide open to security exploitation triggered a lot of righteous anger in the tech community. David Auerbach at Slate wrote that Lenovo had “betrayed its customers and sold out their security”. Whenever a big company does something so monumentally foolish, it’s worth stepping back and asking how this could have happened.
[Read more…]


In Partial Defense of the Seahawks’ Play Calling

The conventional wisdom about last night’s Super Bowl is that the Seahawks made a game-losing mistake by running a passing play from the Patriots’ one-yard line in the closing seconds. Some are calling it the worst Super Bowl play call ever.

I disagree. I won’t claim it was the right call, but I do think it was reasonable. Let me explain why.

To analyze the decision we have to put ourselves in the shoes of the Seahawks’ coaches at the time. They did not know that an opposing defender would make a spectacular interception. They knew that was possible—and needed to take it into account—but a fair analysis of the decision can’t use the hindsight knowledge we have now.

With that established, let’s make a simple model of the Seahawks’ strategic choices. They needed a touchdown to win. It was second down, so they could run three plays. The clock was running down, so let’s assume that if they run two running plays, the clock will expire before they can get a third play off; but an incomplete pass on the first or second play will stop the clock and give them time to run a third play. There are three play sequences they can use: run-run, pass-run-run, run-pass-run. (Passing more than once is bad strategy.)

Suppose that a run play with Marshawn Lynch scores 85% of the time, and gets stuffed at the line 15% of the time. If you run twice, there is a 2.25% chance you’ll get stuffed twice, so you win the game with 97.75% probability.

Suppose that passing on second down has these results: score: 50%, incomplete: 49%, interception: 1%. So if you call the pass-run-run sequence, the game outcome probabilities are: score: 97.90%, stopped short: 1.10%, interception: 1%. The odds of winning are a tiny bit better than if you just ran twice.

It’s counterintuitive that passing might be the right choice even though a running play is more likely to score. The reason it comes out this way is that you’re not passing instead of running; you’re passing because the pass buys you an extra play. Unless the opponent makes a spectacular interception, you can still try to run twice afterward.

Now you can quibble with these probability estimates; and you can argue that the Seahawks might have had time to do three run plays. Change these assumptions, and the strategic calculations are different. But the argument so far should establish that the Seahawks weren’t crazy to pass.

The real kicker comes, though, when we consider the remaining option of run-pass-run. If the outcomes of a pass are still 50/49/1 on third down, then run-pass-run is a clear winner. But maybe a pass comes as less of a surprise on third down, so the outcomes of a pass might be worse. Even so, run-pass-run turns out to be the best strategy. For example, if the outcomes of a third-down pass are score: 25%, incomplete: 73%, interception: 2%, the run-pass-run strategy still scores 98.06% of the time, which is better than either of the other options.
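These numbers are easy to check. The short Python sketch below plugs the probabilities above into the simple model (two plays if both are runs, three if one of them is a pass) and reproduces the scoring probability for each sequence.

```python
# Touchdown probability for each play sequence under the simple model
# in the post: a run scores 85% of the time; a second-down pass scores
# 50% / falls incomplete 49% / is intercepted 1%; a third-down pass
# scores 25% / 73% / 2%.
P_RUN = 0.85

def run_run(p_run=P_RUN):
    # Two runs only; the clock expires before a third play.
    return 1 - (1 - p_run) ** 2

def pass_run_run(p_score=0.50, p_int=0.01, p_run=P_RUN):
    # Score on the pass, or (if it falls incomplete) get two more runs.
    p_incomplete = 1 - p_score - p_int
    return p_score + p_incomplete * run_run(p_run)

def run_pass_run(p_score=0.25, p_int=0.02, p_run=P_RUN):
    # Score on the second-down run, else pass on third down,
    # else (if the pass falls incomplete) run once more on fourth.
    p_incomplete = 1 - p_score - p_int
    return p_run + (1 - p_run) * (p_score + p_incomplete * p_run)

print(f"run-run:      {run_run():.4%}")       # 97.7500%
print(f"pass-run-run: {pass_run_run():.4%}")  # 97.8975%
print(f"run-pass-run: {run_pass_run():.4%}")  # 98.0575%
```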

The conclusion that run-pass-run is the best sequence is fairly robust against changes in the probability assumptions. If it’s wrong, it’s probably because of the assumption that run-run-run isn’t an option.

The Seahawks’ decision to pass on second down wasn’t crazy, but a better choice would have been to pass on third down. Announcers who said “just run twice” were giving bad advice. The Seahawks didn’t make a terrible play call; they made a reasonable choice but were defeated by a great defensive play.