November 24, 2024

Palin's Email Breached Through Weak Yahoo Password Recovery Mechanism

This week’s breach of Sarah Palin’s Yahoo Mail account has been much discussed. One aspect that has gotten less attention is how the breach occurred, and what it tells us about security and online behavior.

(My understanding of the facts is based on press stories, and on reading a forum post written by somebody claiming to be the perpetrator. I’m assuming the accuracy of the forum post, so take this with an appropriate grain of salt.)

The attacker apparently got access to the account by using Yahoo’s password reset mechanism, that is, by following the same steps Palin would have followed had she forgotten her own password.

Yahoo’s password reset mechanism is surprisingly weak and easily attacked. To simulate the attack on Palin, I performed the same “attack” on a friend’s account (with the friend’s permission, of course). As far as I know, I followed the same steps that the Palin attacker did.

First, I went to Yahoo’s web site and said I had forgotten my password. It asked me to enter my email address. I entered my friend’s address. It then gave me the option of emailing a new password to my friend’s alternate email address, or doing an immediate password reset on the site. I chose the latter. Yahoo then prompted me with my friend’s security question, which my friend had previously chosen from a list of questions provided by Yahoo. It took me six guesses to get the right answer. Next, Yahoo asked me to confirm my friend’s country of residence and zip code — it displayed the correct values, and I just had to confirm that they were correct. That’s all! The next step had me enter a new password for my friend’s account, which would have allowed me to access the account at will.
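The logic of the reset flow I walked through can be sketched in a few lines of Python. This is a hypothetical reconstruction of the checks as I experienced them, not Yahoo's actual code; all the names below are my own labels for the steps.

```python
# Hypothetical sketch of the reset flow described above -- not Yahoo's code.
def reset_password(account, answer_attempts, claimed_country, claimed_zip):
    """Return True if the attacker-visible checks would pass."""
    # Step 1: the security question, which allows repeated guesses.
    question_ok = any(a == account["security_answer"] for a in answer_attempts)

    # Step 2: country and zip are merely *displayed* for confirmation,
    # so an attacker learns them for free and always "confirms" them.
    location_ok = True  # no secret knowledge required

    return question_ok and location_ok

account = {"security_answer": "wasilla high"}
guesses = ["anchorage", "juneau", "wasilla", "wasilla hs",
           "wasilla high school", "wasilla high"]  # sixth guess succeeds
print(reset_password(account, guesses, "US", "99654"))  # → True
```

The point of the sketch is that only one of the two "checks" demands any secret knowledge at all, and that one tolerates unlimited guessing.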

The only real security mechanism here is the security question, and it’s often easy to guess the right answer, especially given several tries. Reportedly, Palin’s question was “Where did you meet your spouse?” and the correct answer was “Wasilla high”. Wikipedia says that Palin attended Wasilla High School and met her husband-to-be in high school, so “Wasilla high” is an easy guess.
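A quick back-of-envelope calculation shows why a guessable question plus multiple tries is so weak. The numbers here are illustrative assumptions, not data about Yahoo's actual limits:

```python
# Illustrative: if the attacker can narrow the answer to a small pool of
# plausible candidates, a handful of tries usually suffices.
pool_size = 10   # assumed count of plausible "where did you meet" answers
tries = 6        # repeated guesses were evidently permitted
p_success = min(1.0, tries / pool_size)  # uniform-guessing approximation
print(p_success)  # → 0.6
```

Even under generous assumptions about the answer pool, the attacker's odds are far better than any password guesser could hope for.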

This attack was not exactly rocket science. Contrary to some news reports, the attacker did not display any particular technical prowess, though he did display stupidity, ethical blindness, and disrespect for the law — for which he will presumably be punished.

Password recovery is often the weakest link in password-based security, but it’s still surprising that Yahoo’s recovery scheme was so weak. In Yahoo’s defense, it’s hard to verify that somebody is really the original account holder when you don’t have much information about who the original account holder is. It’s not like Sarah Palin registered for the email account by showing up at a Yahoo office with three forms of ID. All Yahoo knows is that the original account holder claimed to have the name Sarah Palin, claimed to have been born on a particular date and to live in a particular zip code, and claimed to have met his/her spouse at “Wasilla high”. Since this information was all in the public record, Yahoo really had no way to be sure who the account holder was — so it might have seemed reasonable to give access to somebody who showed up later claiming to have the same name, email address, and spouse-meeting place.

Still, we shouldn’t let Yahoo off the hook completely. Millions of Yahoo customers who are not security experts (or are security experts but want to delegate security decisions to someone else) entrusted the security of their email accounts to Yahoo on the assumption that Yahoo would provide reasonable security. Palin probably made this assumption, and Yahoo let her down.

If there’s a silver lining in this ugly incident, it is the possibility that Yahoo and other sites will rethink their password recovery mechanisms, and that users will think more carefully about the risk of email breaches.

Cheap CAPTCHA Solving Changes the Security Game

ZDNet’s “Zero Day” blog has an interesting post on the gray-market economy in solving CAPTCHAs.

CAPTCHAs are those online tests that ask you to type in a sequence of characters from a hard-to-read image. By doing this, you prove that you’re a real person and not an automated bot – the assumption being that bots cannot decipher the CAPTCHA images reliably. The goal of CAPTCHAs is to raise the price of access to a resource, by requiring a small quantum of human attention, in the hope that legitimate human users will be willing to expend a little attention but spammers, password guessers, and other unwanted users will not.

It’s no surprise, then, that a gray market in CAPTCHA-solving has developed, and that that market uses technology to deliver CAPTCHAs efficiently to low-wage workers who solve many CAPTCHAs per hour. It’s no surprise, either, that there is vigorous competition between CAPTCHA-solving firms in India and elsewhere. The going rate, for high-volume buyers, seems to be about $0.002 per CAPTCHA solved.

I would happily pay that rate to have somebody else solve the CAPTCHAs I encounter. I see two or three CAPTCHAs a week, so this would cost me about twenty-five cents a year. I assume most of you, and most people in the developed world, would happily pay that much to never see CAPTCHAs. There’s an obvious business opportunity here, to provide a browser plugin that recognizes CAPTCHAs and outsources them to low-wage solvers – if some entrepreneur can overcome transaction costs and any legal issues.
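The arithmetic behind that twenty-five-cent figure is simple enough to spell out, using the rate quoted above:

```python
# Rough annual cost of outsourcing one's own CAPTCHAs at the quoted rate.
price_per_captcha = 0.002   # dollars, the high-volume rate
captchas_per_week = 2.5     # "two or three" a week
annual_cost = price_per_captcha * captchas_per_week * 52
print(round(annual_cost, 2))  # → 0.26
```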

Of course, the fact that CAPTCHAs can be solved for a small fee, and even that most users are willing to pay that fee, does not make CAPTCHAs useless. They still do raise the cost of spamming and other undesired behavior. The key question is whether imposing a $0.002 fee on certain kinds of accesses deters enough bad behavior. That’s an empirical question that is answerable in principle. We might not have the data to answer it in practice, at least not yet.
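Framed as arithmetic, the deterrence question is just a break-even comparison: spamming remains profitable whenever the expected revenue from an account exceeds the CAPTCHA fee paid to create it. The figures below are illustrative assumptions, not measured data:

```python
# Break-even sketch: does a per-CAPTCHA fee deter a spammer?
fee = 0.002                    # cost to solve one CAPTCHA (quoted rate)
messages_per_account = 1000    # assumed spam sent before account shutdown
revenue_per_message = 0.00001  # assumed expected earnings per message
revenue_per_account = messages_per_account * revenue_per_message
print(revenue_per_account > fee)  # → True: the fee alone doesn't deter
```

Under these assumptions the fee is an order of magnitude too small to matter; the empirical question is what the real values of those two assumed parameters are.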

Another interesting question is whether it’s good public policy to try to stop CAPTCHA-solving services. It’s not clear whether governments can actually hinder CAPTCHA-solving services enough to raise the price (or risk) of using them. But even assuming that governments can raise the price of CAPTCHA-solving, the price increase will deter some bad behavior but will also prevent some beneficial transactions such as outsourcing by legitimate customers. Whether the bad behavior deterred outweighs the good behavior deterred is another empirical question we probably can’t answer yet.

On the first question – the impact of cheap CAPTCHA-solving – we’re starting a real-world experiment, like it or not.

What's the Cyber in Cyber-Security?

Recently Barack Obama gave a speech on security, focusing on nuclear, biological, and infotech threats. It was a good, thoughtful speech, but I couldn’t help noticing how, in his discussion of the infotech threats, he promised to appoint a “National Cyber Advisor” to give the president advice about infotech threats. It’s now becoming standard Washington parlance to say “cyber” as a shorthand for what many of us would call “information security.” I won’t fault Obama for using the terminology spoken by the usual Washington experts. Still, it’s interesting to consider how Washington has developed its own terminology, and what that terminology reveals about the inside-the-beltway view of the information security problem.

The word “cyber” has interesting roots. It started with an old Greek word meaning (roughly) one who guides a boat, such as a pilot or rudder operator. Plato adapted this word to mean something like “governance”, on the basis that governing was like steering society. Already in ancient Greece, the term had taken on connotations of central government control.

Fast-forward to the twentieth century. Norbert Wiener foresaw the rise of sophisticated robots, and realized that a robot would need something like a brain to control its mechanisms, as your brain controls your body. Wiener predicted correctly that this kind of controller would be difficult to design and build, so he sought a word to describe the study of these “intelligent” controllers. Not finding a suitable word in English, he reached back to the old Greek word, which he transliterated into English as “cybernetics”. Notice the connection Wiener drew between governance and technological control.

Enter William Gibson. In his early novels about the electronic future, he wanted a term for the “space” where online interactions happen. Failing to find a suitable word, he coined one – cyberspace – by borrowing “cyber” from Wiener. Gibson’s 1984 novel Neuromancer popularized the term. Many of the Net’s early adopters were fans of Gibson’s work, so cyberspace became a standard name for the place you went when you were on the Net.

The odd thing about this usage is that the Internet lacks the kind of central control system that is the subject matter of cybernetics. Gibson knew this – his vision of the Net was decentralized and chaotic – but he liked the term anyway. As Gibson later recalled:

All I knew about the word “cyberspace” when I coined it, was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page.

Indeed, the term proved just as evocative for others as it was for Gibson, and it stuck.

As the Net grew, it was widely seen as ungovernable – which many people liked. John Perry Barlow’s “Declaration of Independence of Cyberspace” famously declared that governments have no place in cyberspace. Barlow notwithstanding, government did show up in cyberspace, but it has never come close to the kind of cybernetic control Wiener envisioned.

Meanwhile, the government’s security experts settled on a term, “information security”, or “infosec” for short, to describe the problem of securing information and digital systems. The term is widely used outside of government (along with similar terms “computer security” and “network security”) – the course I teach at Princeton on this topic is called “information security”, and many companies have Chief Information Security Officers to manage their security exposure.

So how did this term “cybersecurity” get mindshare, when we already had a useful term for the same thing? I’m not sure – give me your theories in the comments – but I wouldn’t be surprised if it reflects a military influence on government thinking. As both military and civilian organizations became wedded to digital technology, the military started preparing to defend certain national interests in an online setting. Military thinking on this topic naturally followed the modes of thought used for conventional warfare. Military units conduct reconnaissance; they maneuver over terrain; they use weapons where necessary. This mindset wants to think of security as defending some kind of terrain – and the terrain can only be cyberspace. If you’re defending cyberspace, you must be doing something called cybersecurity. Over time, “cybersecurity” somehow became “cyber security” and then just “cyber”.

Listening to Washington discussions about “cyber”, we often hear strategies designed to exert control or put government in a role of controlling, or at least steering, the evolution of technology. In this community, at least, the meaning of “cyber” has come full circle, back to Wiener’s vision of technocratic control, and Plato’s vision of government steering the ship.

Transit Card Maker Sues Dutch University to Block Paper

NXP, which makes the Mifare transit cards used in several countries, has sued Radboud University Nijmegen (in the Netherlands) to block publication of a research paper, “A Practical Attack on the MIFARE Classic,” which is scheduled for publication at the ESORICS security conference in October. The new paper reportedly shows fatal security flaws in NXP’s Mifare Classic, which appears to be the world’s most commonly used contactless smartcard.

I wrote back in January about the flaws found by previous studies of Mifare. After the previous studies, there wasn’t much left to attack in Mifare Classic. The new paper, if its claims are correct, shows that it’s fairly easy to defeat Mifare Classic completely.
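For a sense of scale: Mifare Classic's proprietary cipher, Crypto-1, uses 48-bit keys, so even naive brute force is within reach of modest hardware; the published attacks are far faster still. A back-of-envelope sketch, with an assumed (not measured) hardware search rate:

```python
# Scale of a naive brute-force search of Crypto-1's 48-bit keyspace.
# The search rate is an illustrative assumption for dedicated hardware.
keyspace = 2 ** 48
keys_per_second = 10 ** 9          # assumed rate
seconds = keyspace / keys_per_second
print(round(seconds / 86400, 1))   # → 3.3 (days of exhaustive search)
```

A keyspace that falls to days of exhaustive search offers no real margin; the cryptanalytic attacks in the literature reduce the work far below even this.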

It’s not clear what legal argument NXP is giving for trying to suppress the paper. There was a court hearing last week in Arnhem, but I haven’t seen any reports in the English-language press. Perhaps a Dutch-speaking reader can fill in more details. An NXP spokesman has called the paper “irresponsible”, but that assertion is hardly a legal justification for censoring the paper.

Predictably, a document purporting to be the censored paper showed up on Wikileaks, and BoingBoing linked to it. Then, for some reason, it disappeared from Wikileaks, though BoingBoing commenters quickly pointed out that it was still available in Google’s cache of Wikileaks, and also at Cryptome. But why go to a leak-site? The same article has been available on the Web all along on arXiv, a popular repository of sci/tech research preprints run by the Cornell University library.

[UPDATE (July 15): It appears that Wikileaks had the wrong paper, though one that came from the same Radboud group. The censored paper is called “Dismantling Mifare Classic”.]

As usual in these cases of censorship-by-lawsuit, it’s hard to see what NXP is trying to achieve with the suit. The research is already done and peer-reviewed. The suit will only broaden the paper’s readership, and NXP’s approach will alienate the research community. The previous Radboud paper already criticizes NXP’s approach, in a paragraph written before the lawsuit:

We would like to stress that we notified NXP of our findings before publishing our results. Moreover, we gave them the opportunity to discuss with us how to publish our results without damaging their (and their customers) immediate interests. They did not take advantage of this offer.

What is really puzzling here is that the paper is not a huge advance over what has already been published. People following the literature on Mifare Classic – a larger group, thanks to the lawsuit – already know that the system is unsound. Had NXP reacted responsibly to this previous work, admitting the Mifare Classic problems and getting to work on migrating customers to newer, more secure products, none of this would have been necessary.

You’ve got to wonder what NXP was thinking. The lawsuit is almost certain to backfire: it will only boost the audience of the censored paper and of other papers criticizing Mifare Classic. Perhaps some executive got angry and wanted to sue the university out of spite. Things can’t be comfortable in the executive suite: NXP’s failure to get in front of the Mifare Classic problems will (rightly) erode customers’ trust in the company and its products.

UPDATE (July 18): The court ruled against NXP, so the researchers are free to publish. See Mrten’s comment below.

NJ Election Day: Voting Machine Status

Today is primary election day in New Jersey, for all races except U.S. President. (The presidential primary was Feb. 5.) Here’s a roundup of the voting-machine-related issues.

First, Union County found that Sequoia voting machines had difficulty reporting results for a candidate named Carlos Cedeño, reportedly because it couldn’t handle the n-with-tilde character in his last name. According to the Star-Ledger, Sequoia says that election results will be correct but there will be some kind of omission on the result tape printed by the voting machine.
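We don't know Sequoia's actual firmware internals, but the reported symptom is consistent with a familiar class of bug: a processing path restricted to a narrow character set that cannot represent "ñ". A minimal sketch of that failure mode, assuming an ASCII-only path:

```python
# Illustrative only: how an ASCII-limited path mishandles "Cedeño".
name = "Cedeño"
try:
    encoded = name.encode("ascii")  # a strict ASCII path fails outright
except UnicodeEncodeError:
    # a lossy fallback substitutes '?', which could explain a garbled
    # or omitted entry on a printed result tape
    encoded = name.encode("ascii", errors="replace")
print(encoded.decode())  # → Cede?o
```

Either behavior – an outright error or silent substitution – would produce exactly the kind of omission on the result tape that Sequoia described.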

Second, the voting machines in my polling place are fitted with a clear-plastic shield over the operator panel, which only allows certain buttons on the panel to be pressed. Recall that some Sequoia machines reported discrepancies in the presidential primary on Feb. 5, and Sequoia said that these happened when poll workers accidentally pressed buttons on the operator panel that were supposed to be unused. This could only have been caused by a design problem in the machines, which probably was in the software. To my knowledge, Sequoia hasn’t fixed the design problem (nor have they offered an explanation that is consistent with all of the evidence – but that’s another story), so there was likely an ongoing risk of trouble in today’s election. The plastic shield looks like a kludgy but probably workable temporary fix.

Third, voting machines were left unguarded all over Princeton, as usual. On Sunday and Monday evenings, I visited five polling places in Princeton and found unguarded voting machines in all of them – 18 machines in all. The machines were sitting in school cafeteria/gyms, entry hallways, and even in a loading dock area. In no case were there any locks or barriers stopping people from entering and walking right up to the machines. In no case did I see any other people. (This was in the evening, roughly between 8:00 and 9:00 PM). There were even handy signs posted on the street pointing the way to the polling place, showing which door to enter, and so on.

Here are some photos of unguarded voting machines, taken on Sunday and Monday: