Archives for July 2007

California Study: Voting Machines Vulnerable; Worse to Come?

A major study of three e-voting systems, commissioned by the California Secretary of State’s office, reported Friday that all three had multiple serious vulnerabilities.

The study examined systems from Diebold, Hart InterCivic, and Sequoia; each system included a touch-screen machine, an optical-scan machine, and the associated backend control and tabulation machine. Each system was studied by three teams: a “red team” did a hands-on study of the machines, a “source code team” examined the software source code for the system, and a “documentation team” examined documents associated with the system and its certification. (An additional team studied the accessibility of the three systems – an important topic but beyond the scope of this post.)

(I did not participate in the study. An early press release from the state listed me as a participant but that was premature. I ultimately had to withdraw before the study began, due to a scheduling issue.)

So far only the red team (and accessibility) reports have been released, which makes one wonder what is in the remaining reports.

The bottom-line paragraph from the red team overview says this (section 6.4):

The red teams demonstrated that the security mechanisms provided for all systems analyzed were inadequate to ensure accuracy and integrity of the election results and of the systems that provide those results.

The red teams all reported having inadequate time to fully plumb the systems’ vulnerabilities (section 4.0):

The short time allocated to this study has several implications. The key one is that the results presented in this study should be seen as a “lower bound”; all team members felt that they lacked sufficient time to conduct a thorough examination, and consequently may have missed other serious vulnerabilities. In particular, Abbott’s team [which studied the Diebold and Hart systems] reported that it believed it was close to finding several other problems, but stopped in order to prepare and deliver the required reports on time. These unexplored avenues are presented in the reports, so that others may pursue them. Vigna’s and Kemmerer’s team [which studied the Sequoia system] also reported that they were confident further testing would reveal additional security issues.

Despite the limited time, the teams found ways to breach the physical security of all three systems using only “ordinary objects” (presumably paper clips, coins, pencil erasers, and the like); they found ways to modify or overwrite the basic control software in all three voting machines; and they were able to penetrate the backend tabulator system and manipulate election records.

The source code and documentation studies have not yet been released. To my knowledge, the state has not given a reason for the delay in releasing these reports.

The California Secretary of State reportedly has until Friday to decide whether to allow these systems to be used in the state’s February 2008 primary election.

[UPDATE: A public hearing on the study is being webcast live at 10:00 AM Pacific today.]

DRM for Chargers: Possibly Good for Users

Apple has filed a patent application on a technology for tethering rechargeable devices (like iPods) to particular chargers. The idea is that the device will only allow its batteries to be recharged if it is connected to an authorized charger.

Whether this is good for consumers depends on how a device comes to be authorized. If “authorized” just means “sold or licensed by Apple” then consumers won’t benefit – the only effect will be to give Apple control of the aftermarket for replacement chargers.

But if the iPod’s owner decides which chargers are authorized, then this might be a useful anti-theft measure – there’s little point in stealing an iPod if you won’t be able to recharge it.

How might this work? One possibility is that when the device is plugged in to a charger it hasn’t seen before, it makes a noise and prompts the user to enter a password on the iPod’s screen. If the correct password is entered, the device will allow itself to be recharged by that charger in the future. The device will become associated with a group of chargers over time.
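
To make the first approach concrete, here is a minimal sketch, in Python, of how a device might accumulate a set of trusted chargers. Everything in it – the class and method names, the charger IDs, the use of a hashed password – is my own illustration of the general idea, not anything taken from Apple’s patent.

```python
import hashlib

class Device:
    """Hypothetical iPod-side logic for remembering authorized chargers."""

    def __init__(self, password: str):
        # Store only a hash of the owner's password, not the password itself.
        self._password_hash = hashlib.sha256(password.encode()).hexdigest()
        self._authorized_chargers: set[str] = set()  # chargers paired so far

    def connect_charger(self, charger_id: str, password_attempt: str | None = None) -> bool:
        """Return True if charging is allowed on this charger."""
        if charger_id in self._authorized_chargers:
            return True  # previously paired charger: charge without prompting
        # Unknown charger: prompt the owner for the password (simulated here).
        if password_attempt is not None and (
            hashlib.sha256(password_attempt.encode()).hexdigest() == self._password_hash
        ):
            self._authorized_chargers.add(charger_id)  # remember for next time
            return True
        return False  # wrong or missing password: refuse to charge

# Usage: the device builds up a group of trusted chargers over time.
ipod = Device(password="hunter2")
assert not ipod.connect_charger("office-charger")          # unknown, no password
assert ipod.connect_charger("office-charger", "hunter2")   # paired now
assert ipod.connect_charger("office-charger")              # charges silently thereafter
```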

Another possibility, mentioned in the patent, is that there could be a central registry of stolen iPods. When you synched your iPod with your computer, the computer would get a digitally signed statement from the registry, saying that your iPod was not listed as stolen. The computer would pass that signed statement on to the iPod. If the iPod went too long without seeing such a statement, it would demand that the user do a synch, or enter a password, before it would allow itself to be recharged.
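
The registry approach can be sketched the same way. The snippet below uses an HMAC as a stand-in for the registry’s digital signature, and assumes a 30-day freshness window; the key, the field names, and the window length are all invented for illustration, not taken from the patent.

```python
import hmac, hashlib, json, time

REGISTRY_KEY = b"registry-signing-key"     # stand-in for the registry's real signing key
MAX_AGE_SECONDS = 30 * 24 * 3600           # assumed: a statement stays valid for 30 days

def registry_sign(device_id: str, issued_at: float) -> dict:
    """Registry side: issue a signed 'not listed as stolen' statement."""
    payload = json.dumps({"device_id": device_id, "issued_at": issued_at}, sort_keys=True)
    sig = hmac.new(REGISTRY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def device_allows_charging(statement: dict, device_id: str, now: float) -> bool:
    """Device side: charge only if the statement verifies, names us, and is fresh."""
    expected = hmac.new(REGISTRY_KEY, statement["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, statement["sig"]):
        return False                       # forged or corrupted statement
    fields = json.loads(statement["payload"])
    if fields["device_id"] != device_id:
        return False                       # statement is for a different device
    return now - fields["issued_at"] <= MAX_AGE_SECONDS  # stale: demand a sync or password

stmt = registry_sign("ipod-1234", issued_at=time.time())
print(device_allows_charging(stmt, "ipod-1234", now=time.time()))  # True while fresh
```

In a real deployment the device would hold only the registry’s public key and verify a public-key signature, so that compromising one iPod would not let anyone forge “not stolen” statements for others.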

How can we tell whether a DRM scheme like this is good for users? One sure-fire test is whether the user has the option of turning the scheme off. You don’t want a thief to be able to disable the scheme on a stolen iPod, but it’s safe to let the user disable the anti-theft feature the first time she syncs her new iPod, or later by entering a password.

We don’t know yet whether Apple will do this. But reading the patent, it looks to me like Apple has thought carefully about the legitimate anti-theft uses of this technology. That’s a good sign.

Why No Phoneless iPhone?

I know the iPhone is like so last week, but I want to ask one more question about it: why does Apple insist on users registering for an AT&T account? Officially at least, you have to agree to a two-year contract with AT&T cellular before you can activate your iPhone, even if you will never use it as a phone. (There are ways around this, but Apple seems to wish they didn’t exist.) Which is a shame, because the iPhone is a pretty nice WiFi-enabled portable computer that (to me at least) is less attractive because it’s tied to a two-year AT&T contract.

Of course AT&T is giving Apple a cut of its revenue from the contract. According to rumor, the cut is $3 per month for each user, or $11 per month for users who switch to AT&T from other carriers. Given that about half of iPhone customers switch from other carriers, that’s an average of $7 per month per user, or about $170 total over two years.
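
Spelling out that back-of-the-envelope arithmetic (the per-month figures are the rumored ones above; the even split between new and switching customers and the 24-month term are the stated assumptions):

```python
# Rumored revenue share: $3/month for existing AT&T customers,
# $11/month for customers who switch from another carrier.
existing, switcher = 3, 11
switch_fraction = 0.5                     # roughly half of iPhone buyers switch carriers
avg_per_month = switch_fraction * switcher + (1 - switch_fraction) * existing  # $7
total_over_contract = avg_per_month * 24  # two-year contract: $168, or about $170
print(avg_per_month, total_over_contract)
```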

But that $170 doesn’t answer the question, because Apple could still sell a phoneless iPhone for, say, $800 while the AT&T iPhone costs $600. If you think Apple still comes out behind at $800, then feel free to pick a larger number. There must be some price point at which Apple is happy to sell a phoneless iPhone, right?

Offering one can’t be difficult technically – all Apple would have to do is change the activation procedure so that it doesn’t require the user to sign up for an AT&T contract. So I can see only two reasons why it might be rational for Apple to refuse to offer such a product.

The first is that the market for a phoneless iPhone would be too small at the price Apple would have to charge. If hardly anyone would buy the device at $800, then it might not be worth the trouble to create another option in the product line. This seems unlikely.

The other reason – the only other possible reason, as far as I can tell – is that the mere existence of a phoneless iPhone makes the original iPhone much less attractive to customers, and that this effect is big enough to offset the extra revenue Apple could get by charging an even bigger premium for the phoneless version.

Why might this be? Maybe Apple thinks the iPhone will look less attractive if the value of the contract lock-in (and hence the cost of lock-in to the customer) is made obvious. Or maybe Apple wants to differentiate the iPhone from other phone handsets by making it the only handset that isn’t obviously subsidized by a carrier. Or maybe Apple is keeping a space in its product line open for a future product introduction. Apple and Steve Jobs are clever enough about these things that there must be some good reason.

Exploiting Online Games

Exploiting Online Games, a book by Gary McGraw and Greg Hoglund, is being released today. The book talks concretely about security problems and attacks on online games. Online games are a fascinating laboratory for exploring security issues.

I wrote the book’s foreword. Here it is:

It’s wise to learn from your mistakes. It’s wiser still to learn from the mistakes of others. Too often, we in the security community fail to learn from mistakes because we refuse to talk about them or we pretend they don’t exist.

This book talks frankly about game companies’ mistakes and their consequences. For game companies, this is an opportunity to learn from their own mistakes and those of their peers. For the rest of us, it’s an opportunity to learn what can go wrong so we can do better.

The debate over full disclosure goes back a long way, so there is no need to repeat the ethical and legal arguments we have all heard before. For most of us in the security community, the issue is simple: Experts and the general public both benefit from learning about the technologies that they depend on.

In today’s world, we are asked all the time to bet our money, our time, our private information, and sometimes our lives on the correct functioning of technologies. Making good choices is difficult; we need all the help we can get.

In some fields, such as aviation security, we can be confident that problems will be identified and addressed. Nobody would tolerate an aircraft vendor hiding the cause of a crash or impeding an investigation. Nor would we tolerate a company misleading the public about safety or claiming there were no problems when it knew otherwise. This atmosphere of disclosure, investigation, and remediation is what makes air travel so safe.

In game design, the stakes may not be as high, but the issues are similar. As with aviation, the vendors have a financial stake in the system’s performance, but others have a lot at stake, too. A successful game – especially a virtual world like World of Warcraft – generates its own economy, in several senses. Objects in the game have real financial value, and a growing number of people make their living entirely or partially via in-game transactions. In-world currency trades against the dollar. Economists argue about the exact GDP of virtual worlds, but by any meaningful definition, virtual economies are just as “real” as the NASDAQ stock exchange.

Even nonplayers can have a lot at stake: the investor who bets his retirement account on a game company, the programmer who leaves a good job to work on a game, the family that owns the Indian restaurant across the street from the game company’s headquarters. These people care deeply about whether the technology is sound. And would-be customers, before plunking down their hard-earned money for game software or a monthly subscription, want to know how well a game will stand up to attack.

If aviation shows us the benefits of openness, e-voting illustrates the harms caused by secrecy. We, the users of e-voting systems – citizens, that is – aren’t allowed to know how the machines work. We know the machines are certified, but the certification process is itself shrouded in mystery. We’re told that the details aren’t really our concern. And the consequences are obvious: Designs are weak, problems go unfixed for years, and progress is slow. Even when things do go wrong in the field, it’s very hard to get a vigorous investigation.

The virtue of this book is not only that it talks about real-world problems but also that it provides details. Some security problems exist only in theory but evaporate when real systems are built. Some problems look serious but turn out not to be a big deal in practice. And some problems are much worse than they look on paper. To tell the difference, we need to dig into the details. We need to see precisely how an attack would work and what barriers the attacker has to get over. This book, especially the later chapters, offers the necessary detail.

Because it touches on the popular, hot topic of massively multiplayer games, and because it offers both high-level and detailed views of game security, this book is also a great resource for students who want to learn how security really works. Theory is a valuable tool, but it does its best work when wielded by people with hands-on experience. I started out in this field as a practitioner, trying to learn how to get things done and how real systems behaved, before expanding my horizon to include formal computer science training. I suspect that many senior figures in the field would say the same. When I started out, books like this didn’t exist (or if they did, I didn’t know about them). Today’s students are luckier.

Perhaps some vendors will be unhappy about this book. Perhaps they will try to blame the authors for the insecurity of their game software. Don’t be fooled. If we’re going to improve our security practices, frank discussions like the ones in this book are the only way forward. Or as the authors of this book might say, when you’re facing off against Heinous Demons of Insecurity, you need experienced companions, not to mention a Vorpal Sword of Security Knowledge.

We all make mistakes. Let’s learn from our mistakes and the mistakes of others. That’s our best hope if we want to do better next time.

Why Did Universal Threaten to Pull Out of iTunes?

Last week brought news that Universal Music, the world’s largest record company, was threatening to pull its music from Apple’s iTunes Music Store. Why would Universal do this?

The obvious answer is that the companies are renegotiating their contract and Universal wants to get the best deal they can. Threatening to walk is one way to pressure Apple.

But where digital music is concerned, there is no such thing as a simple negotiation anymore. For one thing, negotiations like this have political ramifications. The major record companies have managed, remarkably, to convince policymakers that protecting their profits should be a goal of public policy; so now any deal that affects the majors’ bottom lines must affect the policy process.

(As I’ve written before, copyright policy should be trying to foster the creation and distribution of varied, high-quality music – which is not the same as trying to ensure anyone’s profits.)

The political implications of Universal’s threat are pretty interesting. For years the major record companies have been arguing that the Internet is hurting them and that policymakers should therefore intervene to protect the majors’ business. iTunes’ success has supplied the major counterargument, suggesting that it’s possible to sell lots of music online.

Walking away from iTunes would cause a big political problem for Universal. How could Universal keep asking government to prop up its online business, when it was walking away from the biggest and most lucrative distribution channel for digital music?

And it’s not just Universal whose political pull would diminish. The other majors would suffer as well; so to the extent that the majors act as a cartel, there would have to be pressure on Universal not to pull out of iTunes.

Most likely, Universal was just bluffing and had no real plan to cut its iTunes ties. If this was a bluff, then it was most likely Apple who leaked the story, as a way of raising the stakes. Its bluff having failed, Universal is stuck doing business on Apple’s terms.

One can’t help wondering what the world would be like had the majors moved early and aggressively to build an online business that customers liked. Having failed to do so, they seem doomed to be followers rather than leaders.