Archives for 2004

WSJ Political Diary on INDUCE Act

Yesterday’s “Political Diary” at the Wall Street Journal’s online OpinionJournal had a nice little piece on Sen. Hatch’s IICA (a.k.a. INDUCE Act). (Access to subscribers only, unfortunately.)

The piece, written by David Robinson, notes that Sen. Hatch, who had previously urged vigorous action against music downloaders, even suggesting “destroying their machines,” has now changed his tune.

Now he’s returned to the issue, this time with a different message: Young downloaders are not crooks, but victims. They have been “tragically” manipulated, he explained on the floor of the Senate, by adults who “exploit the innocence of children.”

The IICA doesn’t seem to be the solution:

Mr. Hatch may have a point – software businesses like Grokster and others do seem to be engaged in trying to profit from their customers’ urge to commit piracy. But his solution seems likely to open a Pandora’s box of frivolous lawsuits, ranging far beyond music downloads. As much as we enjoy Mr. Hatch’s magic similes, “back to the drawing board” would be our advice.

Fancy DRM For Academy Screeners?

Movie studios are considering an elaborate DRM scheme to limit copying of promotional “screener” videos distributed to Academy Award voters, according to an AP story by Gary Gentile.

The article’s description of the scheme is a bit confusing, but I think I can reconstruct how it works. The studios would distribute a special new DVD player to each person receiving videos. Each copy of a video would be encrypted so that only a particular person’s DVD player could decrypt it. The videos would also contain some kind of watermark to identify each individual copy.
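The reconstructed scheme can be sketched in a few lines. This is a toy simulation under my own assumptions (the player IDs, the key-wrapping step, and the XOR "cipher" are all illustrative stand-ins; a real system would use a vetted cipher such as AES), meant only to show how a disc can be tethered to a single player:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data with a SHA-256-based keystream.
    # Illustration only -- a real player would use a vetted cipher.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Hypothetical: each voter's player holds a unique device key.
device_keys = {"player-042": os.urandom(32), "player-117": os.urandom(32)}

def master_disc(video: bytes, recipient: str) -> dict:
    # The studio picks a fresh title key per disc, encrypts the video
    # with it, then wraps the title key under one recipient's device key.
    title_key = os.urandom(32)
    return {
        "wrapped_key": keystream_xor(device_keys[recipient], title_key),
        "payload": keystream_xor(title_key, video),
        "watermark": recipient,  # identifies this copy if it leaks
    }

def play(disc: dict, player_id: str) -> bytes:
    # Only the matching player unwraps the correct title key;
    # any other player recovers garbage.
    title_key = keystream_xor(device_keys[player_id], disc["wrapped_key"])
    return keystream_xor(title_key, disc["payload"])

disc = master_disc(b"screener footage", "player-042")
assert play(disc, "player-042") == b"screener footage"  # intended player
assert play(disc, "player-117") != b"screener footage"  # other players fail
```

Note that the disc alone reveals nothing without a device key, which is consistent with Cinea's claim that "the discs, by themselves, cannot be hacked."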

The technology vendor, Cinea, makes a carefully calibrated technical claim:

Cinea executives said that with enough time and money, a hacker could eventually circumvent the encryption technology hardwired in a single DVD player, but the watermarking will help authorities track down that player.

The discs, by themselves, cannot be hacked, [a Cinea executive] said.

Assuming that this claim is correct, the discs must not be using the lame CSS encryption scheme used by normal DVDs. (CSS is so weak that encryption keys can be recovered easily from a single encrypted disc.) If the designers are smart, they’re using a standard encryption method, in which case it’s probably true that a single disc is not enough to recover the plaintext video. Of course, it’s easy to access the video given a disc and a player – that’s the whole point of having a player.

It’s not clear how sophisticated the watermark would be. Last year, a simple, weak watermark was sufficient to catch a guy who distributed copies of Academy screener videos on the net.

All of this expensive technology might be enough to keep screener videos from leaking onto the net. But this kind of technology won’t work for consumer DVDs. Tethering each disc to a single player would cause major headaches for consumers – imagine having to buy all new discs whenever you bought a new player.

Worse yet, anybody could capture and redistribute the analog output of one of these players. Even if the watermark scheme isn’t broken (and it probably would be, if it mattered), the best the watermark can do is to trace the redistributed copy back to a particular player device. If that device was stolen, or transported to an outlaw region, there is no plausible way to catch the actual perpetrator. This might not be a problem for a modest number of devices, used for a short period by known people, as in the case of screeners; but it would be a fatal flaw on devices that are distributed widely to ordinary people.

UPDATE (July 7): Ernest Miller has some interesting comments on this issue.

Monoculture Debate: Geer vs. Charney

Yesterday the USENIX Conference featured a debate between Dan Geer and Scott Charney about whether operating-system monoculture is a threat to computer security. (Dan Geer is a prominent security expert who co-wrote last year’s CCIA report on the monoculture problem, and was famously fired by @Stake for doing so. Scott Charney was previously a cybercrime prosecutor, and is now Microsoft’s Chief Security Strategist.)

Geer went first, making his case for the dangers of monoculture. He relied heavily on an analogy to biology, arguing that just as genetic diversity helps a population resist predators and epidemics, diversity in operating systems would help the population of computers resist security attacks. The bio metaphor has some power, but I thought Geer relied on it too heavily, and that he would have been better off talking more about computers.

Charney went second, and he made two main arguments. First, he said that we already have more diversity than most people think, even within the world of Windows. Second, he said that the remedy that Geer suggests – adding a modest level of additional diversity, say adopting two major PC operating systems with a 50/50 market share split – would do little good. The bad guys would just learn how to carry out cross-platform attacks; or perhaps they wouldn’t even bother with that, since an attack can take the whole network offline without penetrating a large fraction of machines. (For example, the Slammer attack caused great dislocation despite affecting less than 0.2% of machines on the net.) The bottom line, Charney said, is that increasing diversity would be very expensive but would provide little benefit.

A Q&A session followed, in which the principals clarified their positions but no major points were scored. Closing statements recapped the main arguments.

The moderator, Avi Rubin, polled the audience both before and after the debate, asking how many people agreed with each party’s position. For this purpose, Avi asked both Geer and Charney to state their positions in a single sentence. Geer’s position was that monoculture is a danger to security. Charney’s position was that the remedy suggested by Geer and his allies would do little if anything to make us more secure.

Pre-debate, most people raised their hands to agree with Geer, and only a few hands went up for Charney. Post-debate, Geer got fewer hands than before and Charney got more; but Geer still had a very clear majority.

I would attribute the shift in views to two factors. First, though Geer is very eloquent for a computer scientist, Charney, as an ex-prosecutor, is more skilled at this kind of formalized debate. Second, the audience was more familiar with Geer’s arguments beforehand, while some may have been hearing Charney’s arguments for the first time; so Charney’s arguments had more impact.

Although I learned some things from the debate, my overall position didn’t change. I raised my hand for both propositions, both pre- and post-debate. Geer is right that monoculture raises security dangers. Charney is also right that the critics of monoculture don’t offer compelling remedies.

This is not to say that the current level of concentration in the OS market is optimal from a security standpoint. There is no doubt that we would be more secure if our systems were more diverse. The most important step toward diversity would be to ensure true competition in software markets. Consumers have an incentive to switch to less-prevalent technologies in order to avoid being attacked. (See, e.g., Paul Boutin’s endorsement in Slate of the Mozilla Firefox browser.) In a properly functioning market, I suspect that the diversity problem would take care of itself.

(See also my previous discussion of the monoculture issue.)

USENIX Panel

Today I’ll be speaking on a panel at the USENIX Conference in Boston, on “The Politicization of [Computer] Security.” The panel is 10:30-noon, Eastern time. The other panelists are Jeff Grove (ACM), Gary McGraw (Cigital), and Avi Rubin (Johns Hopkins).

If you’re attending the panel, feel free to provide real-time narration/feedback/discussion in the comments section of this post. I’ll be reading the comments periodically during the panel, and I’ll encourage the other panelists to do so too.

Victims of Spam Filtering

Eric Rescorla wrote recently about three people who must have lots of trouble getting their email through spam filters: Jose Viagra, Julia Cialis, and Josh Ambien. I feel especially sorry for poor Jose, who through no fault of his own must get nothing but smirks whenever he says his name.

Anyway, this reminded me of an interesting problem with Bayesian spam filters: they’re trained by the bad guys.

[Background: A Bayesian spam filter uses human advice to learn how to recognize spam. A human classifies messages into spam and non-spam. The Bayesian filter assigns a score to each word, depending on how often that word appears in spam vs. non-spam messages. Newly arrived messages are then classified based on the scores of the words they contain. Words used mostly in spam, such as “Viagra”, get negative scores, so messages containing them tend to get classified as spam. Which is good, unless your name is Jose Viagra.]
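A minimal sketch of this kind of word scoring, using the same sign convention as above (negative = spammy) and made-up training messages, might look like this:

```python
import math
from collections import Counter

def train(spam_msgs, ham_msgs):
    # Count how often each word appears in spam vs. legitimate mail,
    # then score each word: negative if it leans spammy, positive if
    # it leans legitimate. The +1/+2 terms smooth over unseen words.
    spam_counts, ham_counts = Counter(), Counter()
    for m in spam_msgs:
        spam_counts.update(m.lower().split())
    for m in ham_msgs:
        ham_counts.update(m.lower().split())
    scores = {}
    for w in set(spam_counts) | set(ham_counts):
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
        scores[w] = math.log(p_ham / p_spam)
    return scores

def classify(msg, scores):
    # A message is flagged as spam if its words' scores sum below zero.
    total = sum(scores.get(w, 0.0) for w in msg.lower().split())
    return "spam" if total < 0 else "ham"

scores = train(
    spam_msgs=["buy viagra now", "cheap viagra pills"],
    ham_msgs=["lunch meeting tomorrow", "see you at lunch"],
)
assert classify("viagra discount", scores) == "spam"
assert classify("lunch tomorrow", scores) == "ham"
```

Real filters are more elaborate (per-user training, better tokenization, probability combining), but the core idea is the same: the words’ scores come entirely from what the training mail looked like.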

Many spammers have taken to lacing their messages with sections of “word salad” containing meaningless strings of innocuous-looking words, in the hopes that the word salad will trigger positive associations in the recipient’s Bayesian filter.

Now suppose a big spammer wanted to poison a particular word, so that messages containing that word would be (mis)classified as spam. The spammer could sprinkle the target word throughout the word salad in his outgoing spam messages. When users classified those messages as spam, the targeted word would develop a negative score in the users’ Bayesian spam filters. Later, messages with the targeted word would likely be mistaken for spam.

This attack could even be carried out against a particular targeted user. By feeding that user a steady diet of spam (or pseudo-spam) containing the target word, a malicious person could build up a highly negative score for that word in the targeted user’s filter.
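The poisoning attack above is easy to simulate. In this toy sketch (the training messages and the target word are invented for illustration), lacing outgoing spam with an otherwise-innocent word flips how the filter treats later legitimate mail containing it:

```python
import math
from collections import Counter

def word_scores(spam_msgs, ham_msgs):
    # Per-word score: negative = spammy, positive = legitimate,
    # with +1/+2 smoothing for words unseen in one corpus.
    spam_c, ham_c = Counter(), Counter()
    for m in spam_msgs:
        spam_c.update(m.lower().split())
    for m in ham_msgs:
        ham_c.update(m.lower().split())
    total_s, total_h = sum(spam_c.values()), sum(ham_c.values())
    return {w: math.log(((ham_c[w] + 1) / (total_h + 2)) /
                        ((spam_c[w] + 1) / (total_s + 2)))
            for w in set(spam_c) | set(ham_c)}

def is_spam(msg, scores):
    return sum(scores.get(w, 0.0) for w in msg.lower().split()) < 0

ham = ["movie review attached", "see the new movie"]
ordinary_spam = ["cheap pills online", "buy pills now"]

# Before the attack, the target word is unknown and neutral,
# so a legitimate message containing it passes.
clean = word_scores(ordinary_spam, ham)
assert not is_spam("fahrenheit opens friday", clean)

# The spammer laces every outgoing message with the target word.
poisoned_spam = [m + " fahrenheit fahrenheit" for m in ordinary_spam]

# Once users mark those messages as spam, the word's score turns
# negative, and the same legitimate message is now misclassified.
poisoned = word_scores(poisoned_spam, ham)
assert is_spam("fahrenheit opens friday", poisoned)
```

The attacker never touches the victim’s machine; the victim’s own act of flagging the spam does the poisoning.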

Of course, this won’t work, or will be less effective, for words that have appeared frequently in a user’s legitimate messages in the past. But it might work for a word that is about to become more frequent, such as the name of a person in the news, or a political party. For example, somebody could have tried to poison “Fahrenheit” just before Michael Moore’s movie was released, or “Whitewater” in the early days of the Clinton administration.

There is a general lesson here about the use of learning methods in security. Learning is attractive, because it can adapt to the bad guys’ behavior. But the fact that the bad guys are teaching the system how to behave can also be a serious drawback.