Archives for February 2006

Analog Hole Bill Requires "Open and Public" Discussion of Secret Technology

Today I want to return to the Sensenbrenner-Conyers analog hole bill, which would impose a secret law – a requirement that all devices that accept analog video inputs must implement a secret technical specification for something called a VEIL detector. If you want to see this specification, you have to pay a $10,000 fee to a private company and you have to promise not to tell anyone about the technology. It’s pretty disturbing that our representatives would propose this kind of secret law.

But what is really odd about the secret technology is that the bill itself seems to assume that it is not secret. Consider, for example, Section 105:

If, upon the petition of any interested party, the Director of the Patent and Trademark Office determines that [VEIL] has become materially ineffective in a way that cannot be adequately remedied by existing technical flexibility in the embedding functions of [VEIL], then the Director may by rule adopt commercially reasonable improvements to the detection function of [VEIL] in order to maintain the functionality of the rights signaling system under this Act. Any such improvements shall be limited to adjustments or upgrades solely to the same underlying VEIL technology …

In [the above-described rulemaking], the Director … shall encourage representatives of the film industry, the broadcast, cable, and satellite industry, the information technology industry, and the consumer electronics industry to negotiate in good faith in an effort to reach agreement on the … improvements to [VEIL] to be adopted in the rule. The Director shall ensure that such negotiation process is open and public and that all potentially affected parties are invited to participate in the process through public notice. The Director shall cause any agreement for which there is substantial consensus of the parties on all material points to be published and shall take such agreement into account in any final rule adopted.

This process cannot be “open and public”, and an agreement on how the VEIL technology should be changed cannot be published, if the VEIL technology is secret. You can’t have a negotiation about how VEIL might be fixed, if the parties to that negotiation have promised not to disclose how VEIL works. And you can’t meaningfully invite members of the public to participate in the negotiation if they aren’t allowed to know about the subject being negotiated.

But that’s not all. The rulemaking will happen if somebody files a petition that convinces the Patent Office that VEIL “has become materially ineffective in a way that cannot be adequately remedied by existing technical flexibility in the embedding function” of VEIL.

The embedding function of VEIL is the gizmo that puts VEIL watermarks into video that is going to be distributed. It is separate from the detection function, which detects the presence or absence of a VEIL watermark in video content. The bill mandates that all analog video devices must include the detection function, so it is the detection function that one could learn about by paying the fee and taking the secrecy pledge.
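To make the split concrete, here is a toy sketch of an embedder/detector pair in a generic watermarking scheme. It is emphatically not VEIL (whose algorithms are secret); the low-order-bit trick, the function names, and the sample values are all invented purely to illustrate the two roles.

```python
# Toy sketch of the embedder/detector split in a generic watermarking
# scheme. This is NOT VEIL (whose algorithms are secret); the low-order-
# bit trick below is invented purely to illustrate the two roles.

def embed(samples, flag_bit):
    """Embedder: runs where content is mastered; hides one flag bit in
    the low-order bit of every luminance sample."""
    return [(s & ~1) | flag_bit for s in samples]

def detect(samples):
    """Detector: runs in the consumer device; reports the flag bit if the
    low-order bits agree, or None if no consistent mark is present."""
    bits = [s & 1 for s in samples]
    if all(bits):
        return 1
    if not any(bits):
        return 0
    return None

frame = [120, 121, 118, 119, 122, 117]
print(detect(embed(frame, 1)))  # 1    -> mark found, device restricts copying
print(detect(frame))            # None -> no consistent mark in unmarked video
```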

But the embedding function of VEIL is entirely secret, and is not being revealed even to people who pay the fee and take the pledge. As far as I know, there is no way at all for anyone other than the VEIL company to find out how the embedding function works, or what kind of “existing technical flexibility” it might have. How anyone could petition the Patent Office on that subject is a mystery.

In short, the rulemaking procedure in Section 105 is entirely inconsistent with the secrecy of VEIL. How it got into the bill is therefore a pretty interesting question. Reading the bill, one gets the impression that it was assembled from prefab parts, rather than reflecting a self-consistent vision of how a technology mandate might actually work.

AOL, Yahoo Challenge Email Neutrality

AOL and Yahoo will soon start using Goodmail, a system that lets bulk email senders bypass the companies’ spam filters by paying the companies one-fourth of a cent per message, and promising not to send unsolicited messages, according to a New York Times story by Saul Hansell.

Pay-to-send systems are one standard response to spam. The idea is that raising the cost of sending a message will deter the kind of shot-in-the-dark spamming that sends a pitch to everybody in the hope that somebody, somewhere, will respond. The price should be high enough to deter spamming but low enough that legitimate email won’t be deterred. Or so the theory goes.
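To get a feel for the numbers, here is a back-of-the-envelope sketch at the quarter-cent price reported in the Times story. The response rate and per-response profit below are invented figures, used only to illustrate the deterrence argument.

```python
# Back-of-the-envelope look at pay-to-send at the quarter-cent price in
# the Times story. The response rate and per-response profit are invented
# numbers, here only to illustrate the deterrence argument.

PRICE_PER_MESSAGE = 0.0025  # dollars per message

def campaign_cost(recipients):
    return recipients * PRICE_PER_MESSAGE

def breakeven_response_rate(profit_per_response):
    """Fraction of recipients who must respond for the sender to break even."""
    return PRICE_PER_MESSAGE / profit_per_response

print(campaign_cost(10_000_000))       # 25000.0 -- $25,000 to mail 10M addresses
print(breakeven_response_rate(10.00))  # 0.00025 -- 1 in 4,000 must net $10 each
```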

What’s different here is that senders aren’t paying for delivery, but for an exemption from the email providers’ spam filters. As Eric Rescorla notes, this system creates interesting incentives for the providers. For instance, the providers will have an incentive to make their spam filters overly stringent – so that legitimate messages will be misclassified as spam, and senders will be more likely to pay for an exemption from the filters.

There’s an interesting similarity here to the network neutrality debate. Net neutrality advocates worry that residential ISPs will discriminate against some network traffic so that they can charge web sites and services a fee in exchange for not discriminating against their traffic. In the email case, the worry is that email providers will discriminate against commercial email, so that they can charge email senders a fee in exchange for not discriminating against their messages.

Is this really the same policy problem? If you advocate neutrality regulations on ISPs, does consistency require you to advocate neutrality regulations on email providers? Considering these questions may shed a little light on both issues.

My tentative reaction to the email case is that this may or may not be a smart move by AOL and Yahoo, but they ought to be free to try it. If customers get fewer of the commercial email messages they want (and don’t get enough reduction in spam to make up for it), they’ll be less happy with AOL and Yahoo, and some will take their business elsewhere. The key point, I think, is that customers have realistic alternatives they can switch to. Competition will protect them.

(You may object that switching email providers is costly for a customer who has been using an aol.com or yahoo.com email address – if he switches email providers, his old email address might not work any more. True enough, but a rational email provider will already be exploiting this lock-in, perhaps by charging the customer a slightly higher fee than he would pay elsewhere.)

Competition is a key issue – perhaps the most important one – in the net neutrality debate too. If commercial ISPs face real competition, so that users have realistic alternatives to an ISP who misbehaves, then ISPs will have to heed their customers’ demand for neutral access to sites and services. But if ISPs have monopoly power, their incentives may drive them to behave badly.

To me, the net neutrality issue hinges largely on whether the residential ISP market will be competitive. I can’t make a clear prediction, but I know that there are people who probably can. I’d love to hear what they have to say.

What does seem clear is that regulatory policy can help or hinder the emergence of competition. Enabling competition should be a primary goal of our future telecom regulation.

Report: Many Apps Misconfigure Security Settings

My fellow Princeton computer scientists Sudhakar Govindavajhala and Andrew Appel released an eye-opening report this week on access control problems in several popular applications.

In the old days, operating systems had simple access control mechanisms. In Unix, each file belonged to an owner and a (single) group of users. The owner had the option to give the other group members read and/or write permission, and the option to give everybody read and/or write permission. That was pretty much it.
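For the curious, here is a minimal sketch of how those classic owner/group/other bits can be read with Python's standard library; the path is just an example.

```python
# A quick look at the classic Unix model: one owner, one group, and
# read/write bits for owner, group, and everyone else.
import os
import stat

def unix_permissions(path):
    st = os.stat(path)
    mode = st.st_mode
    return {
        "owner_uid": st.st_uid,
        "group_gid": st.st_gid,
        # each pair is (can_read, can_write)
        "owner": (bool(mode & stat.S_IRUSR), bool(mode & stat.S_IWUSR)),
        "group": (bool(mode & stat.S_IRGRP), bool(mode & stat.S_IWGRP)),
        "other": (bool(mode & stat.S_IROTH), bool(mode & stat.S_IWOTH)),
    }

print(unix_permissions("/etc/hosts"))
```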

Over time, things have gotten more complicated. Windows controls access to about fifteen types of objects, with about thirty different flavors of privileges that can each be granted or denied, for any object, to any user or group of users. Privileges can be managed with great precision. In theory, this lets people grant others the absolute minimum privileges they need to do their jobs, which is good security practice.
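As a rough sketch (not the actual Windows API), that kind of fine-grained access control amounts to a long list of entries, each naming a principal, an object, one specific privilege, and an allow/deny decision. The application and paths below are hypothetical.

```python
# Rough sketch (not the real Windows API) of what fine-grained access
# control amounts to: a long list of entries, each naming a principal,
# an object, one specific privilege, and an allow/deny decision.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessEntry:
    principal: str   # user or group, e.g. "Users", "Administrators"
    obj: str         # file, registry key, service, and so on
    privilege: str   # one of dozens of flavors: "read", "write", "write_dac", ...
    allow: bool      # granted or denied

# Hypothetical settings for a hypothetical application:
settings = [
    AccessEntry("Administrators", r"HKLM\Software\ExampleApp", "write", True),
    AccessEntry("Users",          r"HKLM\Software\ExampleApp", "read",  True),
    # One careless entry in a list like this is all a mistake takes:
    AccessEntry("Users", r"C:\Program Files\ExampleApp\app.exe", "write", True),
]
```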

The downside of this complexity is that if the system is hard to understand, people will make mistakes. End users will surely make mistakes. But you might think that big software companies can manage this complexity and will get the security settings on their products right.

Which brings us to Sudhakar and Andrew’s research. They built an automated tool to analyze the access control settings on files, registry entries, and other objects on a Windows machine. The tool looks at the settings on the machine and applies a set of inference rules that encode the various ways a user could try to leverage his privileges improperly. For example, one rule says that if Alice has the privilege to modify a program, and Bob runs that program, then Alice can use any of Bob’s privileges. (She can do this by adding code to the program that does what she wants; when Bob runs the program, that code will run with Bob’s privileges.) The tool looks for privilege escalation attacks, or ways for a relatively unprivileged user to gain more privilege.
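Here is a minimal sketch of that style of inference, not the authors' actual tool; the usernames and program names are hypothetical.

```python
# Minimal sketch of that style of inference (not the authors' actual tool).
# We track which users can act as which others, and apply the rule above:
# if Alice can modify a program that Bob runs, Alice can act as Bob.

def escalations(can_write, runs):
    """can_write and runs are sets of (user, program) pairs. Returns the
    (attacker, victim) pairs implied by the rule, closed transitively."""
    acts_as = {(u, u) for u, _ in can_write | runs}  # everyone acts as themselves
    changed = True
    while changed:
        changed = False
        for attacker, victim in list(acts_as):
            for writer, prog in can_write:
                if writer != victim:
                    continue
                for runner, run_prog in runs:
                    if run_prog == prog and (attacker, runner) not in acts_as:
                        acts_as.add((attacker, runner))
                        changed = True
    return {(a, v) for a, v in acts_as if a != v}

# Hypothetical misconfiguration: Guest can modify a program that an
# administrator runs, so Guest inherits the administrator's privileges.
can_write = {("Guest", "updater.exe")}
runs = {("Administrator", "updater.exe")}
print(escalations(can_write, runs))  # {('Guest', 'Administrator')}
```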

Sudhakar and Andrew ran the tool on professionally-managed Windows systems, and the results were sobering. Several popular applications, from companies like Adobe, AOL, Macromedia, and Microsoft, had misconfigured their access control in ways that allowed relatively unprivileged users – in some cases even the lowliest Guest account – to gain full control of the system.

Sudhakar and Andrew notified the affected vendors well before publishing the paper, and some of the problems they found have been patched. But some problems remain, and testing on new systems tends to find still more problems.

There are two lessons here. First, complicated security mechanisms lead to mistakes, even among relatively sophisticated software developers and companies, so the desire to control privileges precisely must be tempered by the virtue of simplicity. Second, if you’re going to have a complicated system, you probably need tools to help you figure out whether you’re using it safely.

Paper Naming Contest

So our Sony CD DRM paper is virtually done, except for one thing: the title. We hope you can help us out.

We’re looking for a phrase from a song lyric, song title, or album title that is distinctive and can be read as a pithy comment on the whole Sony CD DRM incident. It should be recognizable and non-inflammatory. The title of the paper will be “ThePhrase: Lessons from the Sony CD DRM Episode”.

Please offer suggestions in the comments. Thanks!

[Update (Feb. 9): Let me emphasize that we’re looking for phrases that would be recognized instantly by most readers. We don’t want to have to explain where the title came from. (If we wanted a phrase that we had to explain, the clear winner would be “Invisible Invasion”, which is the title of the only album we know of that was issued under both of the Sony DRM technologies.)]

What's in the Secret VEIL Test Results?

I wrote last week about how the analog hole bill would mandate use of the secret VEIL technology. Because the law would require compliance with the VEIL specification, that spec would effectively be part of the law. Call me old-fashioned, but I think there’s something wrong when Congress is considering a bill that would impose a secret law. We’re talking about television here, not national security.

Monday’s National Journal Tech Daily had a story (subscribers only; sorry) by Sarah Lai Stirland about the controversy, in which VEIL executive Scott Miller said “the company is willing to provide an executive summary of test results of the system to anyone who wants them.”

Let’s take a look at that test summary. The first thing you’ll notice is how scanty the document is. This is all the testing they did to validate the technology?

The second thing you’ll notice is that the results don’t look very good for VEIL. For example, when they tested to see whether VEIL caused a visible difference in the video image, they found that viewers did report a difference 29% of the time (page 4).

More interesting, perhaps, are the results on removability of the VEIL watermark (page 2). They performed ten unspecified transformations on the video signal and measured how often each transformation made the VEIL watermark undetectable. Results were mixed, ranging from 0% success in removing the watermark up to 58%. What they don’t tell us is what the transformations (which they call “impairments”) were. So all we can conclude is that at least one of the transformations they chose for their test can remove the VEIL watermark most of the time. And if you have any experience at all in the industry, you know that vendor-funded “independent” studies like this tend to pick easy test cases. You have to wonder what would have happened if they had chosen more aggressive transformations to try. And notice that they’re refusing to tell us what the transformations were – another hint that the tests weren’t very strenuous.

The VEIL people have more information about all of these tests, but they are withholding the full testing report from us, even while urging our representatives to subject us to the VEIL technology permanently.

Which suggests an obvious question: What is in the secret portion of the VEIL testing results? What are they hiding?