November 22, 2024

Hot Custom Car (software?)

I’ve found Tim’s bits on life post-driving interesting. I’ve sometimes got a one-track mind, though- so what I really want to know is whether I’ll be able to hack on that self-driving car. I mentioned this to Tim, and he said he wasn’t sure either- so here is my crack at it.

We’re not very good at making choices like this. Historically, liability constrained software development at large institutions (the airlines had a lot of reasons not to let people hack on their airplanes) and benign neglect was sufficient to regulate hacking of personal software- if you hacked your PC or toaster, no one cared because it had no impact (a form of Lessig’s regulation by architecture). The net result was that we didn’t need to regulate software very much, we got lots of innovation from individual developers, and we stayed bad at making choices like ‘how should we regulate people’s ability to hack?’

Individuals are now beginning to own hackable devices that can also harm the neighbors, though, so the space in between large institution and isolated hacker is filling up. For example, the FCC regulates your ability to modify your own wireless devices, so that you can’t interfere with other people’s spectrum. And some of Prof. Jonathan Zittrain’s analysis suggests that we might even want to regulate PCs, since they can now frequently be vectors for spam and viruses. Tim and I are normally fairly anti-regulation, and pro-open source, but even we are aware that cars running around all over the place, driven by potentially untested code, might also fit in this gap- and be more worthy of regulation.

So what should happen? Should we be able to hack our cars (more than we already do), and if so, under what conditions?

It’d help if we could better measure the risks and benefits involved. Unfortunately, probably because we regulate software so rarely, our metrics for assessing the risks and benefits of software development aren’t very good. One such metric is Prof. Zittrain’s ‘generativity’; Dan Wallach’s proposal to measure the ‘O(n)’ of potential system damage is another. Neither is a perfect fit here, but that only confirms that we need more such tools in our software policy toolkit.

This lack of tools shouldn’t stop us from some basic, common-sense analysis, though. On the pro side, the standard arguments for open source apply, though perhaps not as strongly as usual, since many casual hackers might be discouraged at the thought of hacking their own car. We probably would want car manufacturers to pool their safety expertise, which would be facilitated by openness. Finally, we might also want open code for auditing reasons- with millions of lives on the line, this seems like a textbook case for wanting ‘many eyes’ to take a look at the code.

If we accept these arguments on the ‘pro’ hacking side, what then? First, we could require that the car manufacturers use test-driven development, and share those tests with the public- perhaps even allowing the public to add new tests. This would help avoid serious safety problems in the ‘original’ code, and home hackers might be blocked from loading new code into their cars unless the code was certified to have passed the tests. Second, we could treat the consequences very seriously- ‘driving’ with bad code could be treated similarly to DUI. Third, we could make sure that the safety fallbacks (emergency brake systems, etc.) are in separate, redundant (and perhaps only mechanical?) unhackable systems. Having such systems is going to be good engineering whether the code is open or not, and making them unhackable might be a good compromise. (Serious engineers, instead of compsci BAs now in law school, should feel free to suggest other techniques in the comments.)
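To make the testing idea a little more concrete, here is a minimal sketch of what a shared, public safety test might look like. Everything in it is invented for illustration: the toy stopping_command controller and its interface are hypothetical, not any real manufacturer’s code, and a real certification suite would of course be vastly larger.

```python
# Hypothetical sketch of a publicly shared safety test suite.
# The controller and its interface are invented for illustration only.

def stopping_command(speed_mps: float, obstacle_distance_m: float) -> float:
    """Toy controller: return a brake force in [0, 1]."""
    if obstacle_distance_m <= 0:
        return 1.0
    # Brake demand grows with speed^2 / distance (crude, but monotonic).
    demand = (speed_mps ** 2) / (2.0 * 9.8 * obstacle_distance_m)
    return min(1.0, max(0.0, demand))

def test_full_brake_when_obstacle_is_close():
    # Any build loaded into a car must brake fully for a near obstacle at speed.
    assert stopping_command(speed_mps=30.0, obstacle_distance_m=5.0) == 1.0

def test_no_phantom_braking_on_open_road():
    # ...and must not slam the brakes when the road ahead is clear.
    assert stopping_command(speed_mps=30.0, obstacle_distance_m=500.0) < 0.2
```

The point is less the specific tests than the gate: the car would refuse to load home-brewed code unless the build had passed the shared suite.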

Bottom line? First, we don’t really know- we just have pretty poor analytical tools for this type of problem. But if we take a stab at it, we can see that there are some potential solutions that might be able to harness the innovation and generativity of open source in our cars without significantly compromising our safety. At least, not compromising it any more so than the already crazy core idea 🙂

[picture is ‘Car Show 2’, by starmist1, used under the CC-BY license.]

How Yahoo could have protected Palin's email

Last week I criticized Yahoo for their insecure password recovery mechanism that allowed an intruder to take control of Sarah Palin’s email account. Several readers asked me the obvious follow-up question: What should Yahoo have done instead?

Before we discuss alternatives, let’s take a minute to appreciate the delicate balance involved in designing a password recovery mechanism for a free, mass-market web service. On the one hand, users lose their passwords all the time; they generally refuse to take precautions in advance against a lost password; and they won’t accept being locked out of their own accounts because of a lost password. On the other hand, password recovery is an obvious vector for attack — and one exploited at large scale, every day, by spammers and other troublemakers.

Password recovery is especially challenging for email accounts. A common approach to password recovery is to email a new password (or a unique recovery URL) to the user, which works nicely if the user has a stable email address outside the service — but there’s no point in sending email to a user who has lost the password to his only email account.
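For readers who haven’t built one of these: the emailed-recovery-URL approach usually amounts to generating an unguessable, single-use, short-lived token. Here is a minimal sketch with invented names (the store, the mailer, and the URL are stand-ins of mine, not Yahoo’s actual code):

```python
import hashlib
import secrets
import time

RECOVERY_TTL_SECONDS = 3600  # the link expires after an hour

def send_email(address: str, body: str) -> None:
    print(f"to {address}: {body}")  # stand-in for a real mailer

def start_recovery(user_id: str, alternate_email: str, store: dict) -> None:
    token = secrets.token_urlsafe(32)  # unguessable
    store[user_id] = {
        "token_hash": hashlib.sha256(token.encode()).hexdigest(),
        "expires": time.time() + RECOVERY_TTL_SECONDS,
    }
    send_email(alternate_email,
               f"https://mail.example.com/recover?u={user_id}&t={token}")

def finish_recovery(user_id: str, token: str, store: dict) -> bool:
    entry = store.pop(user_id, None)  # pop: the link is single-use
    if entry is None or time.time() > entry["expires"]:
        return False
    return hashlib.sha256(token.encode()).hexdigest() == entry["token_hash"]
```

The security rests entirely on the attacker not being able to read the user’s alternate mailbox, which is exactly why the scheme fails when the account being recovered is the user’s only mailbox.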

Still, Yahoo could be doing more to protect their users’ passwords. They could allow users to make up their own security questions, rather than offering only a fixed set of questions. They could warn users that security questions are a security risk and that users with stable external email addresses might be better off disabling the security-question functionality and relying instead on email for password recovery.

Yahoo could also have followed Gmail’s lead, and disabled the security-question mechanism unless no logged-in user had accessed the account for five days. This clever trick prevents password “recovery” when there is evidence that somebody who knows the password is actively using the account. If the legitimate user loses the password and doesn’t have an alternative email account, he has to wait five days before recovering the password, but this seems like a small price to pay for the extra security.
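The five-day rule is simple to state in code. A sketch of the check, with assumed field names (this is my reading of the rule, not Gmail’s actual implementation):

```python
from datetime import datetime, timedelta

DORMANCY_REQUIRED = timedelta(days=5)

def security_question_allowed(last_authenticated_use: datetime) -> bool:
    """Permit security-question recovery only when nobody who knows the
    password has used the account recently."""
    return datetime.utcnow() - last_authenticated_use >= DORMANCY_REQUIRED

# A login yesterday blocks the security-question path entirely:
assert not security_question_allowed(datetime.utcnow() - timedelta(days=1))
```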

Finally, Yahoo would have been wise, at least from a public-relations standpoint, to give extra protection to high-profile accounts like Palin’s. The existence of these accounts, and even the email addresses, had already been published online. And the account signup at Yahoo asks for a name and postal code, so Yahoo could have recognized that this suddenly-important public figure had an account on their system. (It seems unlikely that Palin gave a false name or postal code in signing up for the account.) Given the public allegations that Palin had used her Yahoo email accounts for state business, these accounts would have been obvious targets for freelance “investigators”.

Some commenters on my previous post argued that all of this is Palin’s fault for using a Yahoo mail account for Alaska state business. As I understand it, the breached account included some state business emails along with some private email. I’ll agree that it was unwise for Palin to put official state email into a Yahoo account, for security reasons alone, not to mention the state rules or laws against doing so. But this doesn’t justify the break-in, and I think anyone would agree that it doesn’t justify publishing non-incriminating private emails taken from the account.

Indeed, the feeding frenzy to grab and publish private material from the account, after the intruder had published the password, is perhaps the ugliest aspect of the whole incident. I don’t know how many people participated — and I’m glad that at least one Good Samaritan tried to re-lock the account — but I hope the republishers get at least a scary visit from the FBI. It looks like the FBI is closing in on the initial intruder. I assume he is facing a bigger punishment.

Palin's email breached through weak Yahoo password recovery mechanism

This week’s breach of Sarah Palin’s Yahoo Mail account has been much discussed. One aspect that has gotten less attention is how the breach occurred, and what it tells us about security and online behavior.

(My understanding of the facts is based on press stories, and on reading a forum post written by somebody claiming to be the perpetrator. I’m assuming the accuracy of the forum post, so take this with an appropriate grain of salt.)

The attacker apparently got access to the account by using Yahoo’s password reset mechanism, that is, by following the same steps Palin would have followed had she forgotten her own password.

Yahoo’s password reset mechanism is surprisingly weak and easily attacked. To simulate the attack on Palin, I performed the same “attack” on a friend’s account (with the friend’s permission, of course). As far as I know, I followed the same steps that the Palin attacker did.

First, I went to Yahoo’s web site and said I had forgotten my password. It asked me to enter my email address. I entered my friend’s address. It then gave me the option of emailing a new password to my friend’s alternate email address, or doing an immediate password reset on the site. I chose the latter. Yahoo then prompted me with my friend’s security question, which my friend had previously chosen from a list of questions provided by Yahoo. It took me six guesses to get the right answer. Next, Yahoo asked me to confirm my friend’s country of residence and zip code — it displayed the correct values, and I just had to confirm that they were correct. That’s all! The next step had me enter a new password for my friend’s account, which would have allowed me to access the account at will.
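Boiled down to pseudocode (all names invented; this is my reconstruction of the flow, not Yahoo’s actual code), the process looks something like this:

```python
from dataclasses import dataclass

@dataclass
class Account:
    security_answer: str
    password: str

def reset_password(account: Account, guesses: list[str],
                   new_password: str) -> bool:
    # Step 1: email address -- public information.
    # Step 2: country and zip code -- the site displays the correct values,
    #         so the "check" is a single click on "confirm".
    for guess in guesses:                # Step 3: as many tries as you like
        if guess.lower() == account.security_answer.lower():
            account.password = new_password
            return True
    return False
```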

The only real security mechanism here is the security question, and it’s often easy to guess the right answer, especially given several tries. Reportedly, Palin’s question was “Where did you meet your spouse?” and the correct answer was “Wasilla high”. Wikipedia says that Palin attended Wasilla High School and met her husband-to-be in high school, so “Wasilla high” is an easy guess.

This attack was not exactly rocket science. Contrary to some news reports, the attacker did not display any particular technical prowess, though he did display stupidity, ethical blindness, and disrespect for the law — for which he will presumably be punished.

Password recovery is often the weakest link in password-based security, but it’s still surprising that Yahoo’s recovery scheme was so weak. In Yahoo’s defense, it’s hard to verify that somebody is really the original account holder when you don’t have much information about who the original account holder is. It’s not like Sarah Palin registered for the email account by showing up at a Yahoo office with three forms of ID. All Yahoo knows is that the original account holder claimed to have the name Sarah Palin, claimed to have been born on a particular date and to live in a particular zip code, and claimed to have met his/her spouse at “Wasilla high”. Since this information was all in the public record, Yahoo really had no way to be sure who the account holder was — so it might have seemed reasonable to give access to somebody who showed up later claiming to have the same name, email address, and spouse-meeting place.

Still, we shouldn’t let Yahoo off the hook completely. Millions of Yahoo customers who are not security experts (or are security experts but want to delegate security decisions to someone else) entrusted the security of their email accounts to Yahoo on the assumption that Yahoo would provide reasonable security. Palin probably made this assumption, and Yahoo let her down.

If there’s a silver lining in this ugly incident, it is the possibility that Yahoo and other sites will rethink their password recovery mechanisms, and that users will think more carefully about the risk of email breaches.

It can be rational to sell your private information cheaply, even if you value privacy

One of the standard claims about privacy is that people say they value their privacy but behave as if they don’t value it. The standard example involves people trading away private information for something of relatively little value. This argument is often put forth to rebut the notion that privacy is an important policy value. Alternatively, it is posed as a “what could they be thinking” puzzle.

I used to be impressed by this argument, but lately I have come to doubt its power. Let me explain why.

Suppose you offer to buy a piece of information about me, such as my location at this moment. I’ll accept the offer if the payment you offer me is more than the harm I would experience due to disclosing the information. What matters here is the marginal harm, defined as the amount of privacy-goodness I would have if I withheld the information, minus the amount I would have if I disclosed it.

The key word here is marginal. If I assume that my life would be utterly private, unless I gave this one piece of information to you, then I might require a high price from you. But if I assume that I have very little privacy to start with, then selling this one piece of information to you makes little difference, and I might as well sell it cheaply. Indeed, the more I assume that my privacy is lost no matter what I do, the lower a price I’ll demand from you. In the limit, where I expect you can get the information for free elsewhere even if I withhold it from you, I’ll be willing to sell you the information for a penny.
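Put as arithmetic (a toy model with made-up numbers): the lowest price I should rationally accept is the marginal value of withholding, that is, the privacy I keep by saying no minus the privacy I keep anyway if I say yes.

```python
def minimum_acceptable_price(value_if_withheld: float,
                             value_if_disclosed: float) -> float:
    """Toy model: the rational floor price for a piece of information is
    the marginal privacy value of withholding it."""
    return max(0.0, value_if_withheld - value_if_disclosed)

# If I believe withholding really protects me, I demand a lot:
assert minimum_acceptable_price(100.0, 5.0) == 95.0

# If I believe the information leaks out anyway, withholding buys me almost
# nothing, and selling for a penny is rational:
assert minimum_acceptable_price(0.01, 0.0) == 0.01
```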

Viewed this way, the price I charge you tells you at least as much about how well I think my privacy is protected as it does about how badly I want to keep my location private. So the answer to “what could they be thinking” is “they could be thinking they have no privacy in the first place”.

And in case you’re wondering: At this moment, I’m sitting in my office at Princeton.

Transit Card Maker Sues Dutch University to Block Paper

NXP, which makes the Mifare transit cards used in several countries, has sued Radboud University Nijmegen (in the Netherlands) to block publication of a research paper, “A Practical Attack on the MIFARE Classic,” that is scheduled for publication at the ESORICS security conference in October. The new paper reportedly shows fatal security flaws in NXP’s Mifare Classic, which appears to be the world’s most commonly used contactless smartcard.

I wrote back in January about the flaws found by previous studies of Mifare. After the previous studies, there wasn’t much left to attack in Mifare Classic. The new paper, if its claims are correct, shows that it’s fairly easy to defeat Mifare Classic completely.

It’s not clear what legal argument NXP is giving for trying to suppress the paper. There was a court hearing last week in Arnhem, but I haven’t seen any reports in the English-language press. Perhaps a Dutch-speaking reader can fill in more details. An NXP spokesman has called the paper “irresponsible”, but that assertion is hardly a legal justification for censoring the paper.

Predictably, a document purporting to be the censored paper showed up on Wikileaks, and BoingBoing linked to it. Then, for some reason, it disappeared from Wikileaks, though BoingBoing commenters quickly pointed out that it was still available in Google’s cache of Wikileaks, and also at Cryptome. But why go to a leak-site? The same article has been available on the Web all along at arXiv, a popular repository of sci/tech research preprints run by the Cornell University library.

[UPDATE (July 15): It appears that Wikileaks had the wrong paper, though one that came from the same Radboud group. The censored paper is called “Dismantling Mifare Classic”.]

As usual in these cases of censorship-by-lawsuit, it’s hard to see what NXP is trying to achieve with the suit. The research is already done and peer-reviewed. The suit will only broaden the paper’s readership. NXP’s approach will alienate the research community. The previous Radboud paper already criticizes NXP’s approach, in a paragraph written before the lawsuit:

We would like to stress that we notified NXP of our findings before publishing our results. Moreover, we gave them the opportunity to discuss with us how to publish our results without damaging their (and their customers) immediate interests. They did not take advantage of this offer.

What is really puzzling here is that the paper is not a huge advance over what has already been published. People following the literature on Mifare Classic – a larger group, thanks to the lawsuit – already know that the system is unsound. Had NXP reacted responsibly to this previous work, admitting the Mifare Classic problems and getting to work on migrating customers to newer, more secure products, none of this would have been necessary.

You’ve got to wonder what NXP was thinking. The lawsuit is almost certain to backfire: it will only boost the audience of the censored paper and of other papers criticizing Mifare Classic. Perhaps some executive got angry and wanted to sue the university out of spite. Things can’t be comfortable in the executive suite: NXP’s failure to get in front of the Mifare Classic problems will (rightly) erode customers’ trust in the company and its products.

UPDATE (July 18): The court ruled against NXP, so the researchers are free to publish. See Mrten’s comment below.