Archives for 2006

Secure Flight Mothballed

Secure Flight, the planned next-generation system for screening airline passengers, has been mothballed by the Transportation Security Administration, according to an AP story by Leslie Miller. TSA chief Kip Hawley cited security concerns and questions about the program’s overall direction.

Last year I served on the Secure Flight Working Group, a committee of outside technology and privacy experts asked by the TSA to give feedback on Secure Flight. After hearing about plans for Secure Flight, I was convinced that TSA didn’t have a clear idea of what the program was supposed to be doing or how it would work. This is essentially what later government studies of the program found. Here’s the AP story:

Nearly four years and $200 million after the program was put into operation, Hawley said last month that the agency hadn’t yet determined precisely how it would work.

Government auditors gave the project failing grades – twice – and rebuked its authors for secretly obtaining personal information about airline passengers.

The sad part of this is that Secure Flight seems to have started out as a simpler program that would have made sense to deploy.

Today, airlines are given a no-fly list and a watch-list, which they are asked to check against their passenger lists. There are obvious security drawbacks to distributing the lists to airlines – a malicious airline employee with access to the lists could leak them to the bad guys. The 9/11 Commission recommended keeping the lists within the government, and having the government check passengers’ names against the lists.

A program designed to do just that would have been a good idea. There would still be design issues to work out. For example, false matches are now handled by airline ticket agents, but that function would probably have to be moved into the government too, which would raise some logistical issues. There would be privacy worries, but they could be handled with good design and oversight.

Instead of sticking to this more modest plan, Secure Flight became a vehicle for pie-in-the-sky plans about data mining and automatic identification of terrorists from consumer databases. As the program’s goals grew more ambitious and collided with practical design and deployment challenges, the program lost focus and seemed to have a different rationale and plan from one month to the next.

What happens now is predictable. The program will officially die but will actually be reincarnated with a new name. Congress has directed TSA to implement a program of this general type, so TSA really has no choice but to try again. Let’s hope that this time they make the hard choices they avoided last time, and end up with a simpler program that solves the easier problems first.

(Fellow Working Group member Lauren Gelman offers a similar take on this story. Another member, Bruce Schneier, has also blogged extensively about Secure Flight.)

Quality of Service: A Quality Argument?

One of the standard arguments one hears against network neutrality rules is that network providers need to provide Quality of Service (QoS) guarantees to certain kinds of traffic, such as video. If QoS is necessary, the argument goes, and if net neutrality rules would hamper QoS by requiring all traffic to be treated the same, then net neutrality rules must be harmful. Today, I want to unpack this argument and see how it holds up in light of computer science research and engineering experience.

First, I need to make clear that guaranteeing QoS for an application means more than just giving it lots of bandwidth or prioritizing its traffic above other applications. Those things might be helpful, but they’re not QoS (or at least not the kind I’m talking about today). What QoS mechanisms (try to) do is to make specific performance guarantees to an app over a short window of time.

An example may clarify this point. If you’re loading a web page, and your network connection hiccups so that you get no traffic for (say) half a second, you may notice a short pause but it won’t be a big deal. But if you’re having a voice conversation with somebody, a half-second gap will be very annoying. Web browsing needs decent bandwidth on average, but voice conversations need better protection against short delays. That protection is QoS.

Careful readers will protest at this point that a good browsing experience depends on more than just average bandwidth. A half-second hiccup might not be a big problem, but a ten-minute pause would be too much, even if performance is really snappy afterward. The difference between voice conversations and browsing is one of degree – voice conversations want guarantees over fractions of seconds, and browsing wants them over fractions of minutes.

The reason we don’t need special QoS mechanisms for browsing is that the broadband Internet already provides performance that is almost always steady enough over the time intervals that matter for browsing.

Sometimes, too, there are simple tricks that can turn an app that cares about short delays into one that cares only about longer delays. For example, watching prerecorded audio or video streams doesn’t need QoS, because you can use buffering. If you’re watching a video, you can download every frame ten seconds before you’re going to watch it; then a hiccup of a few seconds won’t be a problem. This is why streaming audio and video work perfectly well today (when there is enough average bandwidth).
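
To make the buffering trick concrete, here is a minimal sketch in Python of a playout buffer. The chunk size, prefetch depth, and fetch_chunk function are assumptions for the sake of illustration, not part of any real player; a real client would also fetch and play concurrently rather than alternating as this simplified loop does.

    import time
    from collections import deque

    PREFETCH_CHUNKS = 10   # assumed: ten one-second chunks buffered before playback starts

    def play_one_second(chunk):
        time.sleep(1)      # stand-in for rendering one second of audio or video

    def play_stream(fetch_chunk, total_chunks):
        # fetch_chunk(i) is assumed to return one second of media, possibly
        # after a network delay of up to several seconds.
        buffer = deque()
        next_chunk = 0

        # 1. Fill the buffer before playback begins.
        while next_chunk < min(PREFETCH_CHUNKS, total_chunks):
            buffer.append(fetch_chunk(next_chunk))
            next_chunk += 1

        # 2. Play from the buffer while topping it back up. Because playback
        #    always reads from the buffer, a hiccup in fetching drains the
        #    buffer instead of freezing what the viewer sees.
        while buffer:
            play_one_second(buffer.popleft())
            if next_chunk < total_chunks:
                buffer.append(fetch_chunk(next_chunk))
                next_chunk += 1

The point of the sketch is just the structure: playback never depends on the network being fast right now, only on it having been adequate, on average, over the last ten seconds.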

There are two other important cases where QoS isn’t needed. First, if an app needs higher average speed than the Net can provide, then QoS won’t help it – QoS makes the Net’s speed steadier but not faster. Second – and less obviously – if an app needs much less average speed than the Net can provide, then QoS might also be unnecessary. If speed doesn’t drop entirely to zero but fluctuates, with peaks and valleys, then even the valleys may be high enough to give the app what it needs. This is starting to happen for voice conversations – Skype and other VoIP systems seem to work pretty well without any special QoS support in the network.

We can’t say that QoS is never needed, but experience does teach that it’s easy, especially for non-experts, to overestimate the importance of QoS. That’s why I’m not convinced – though I could be, with more evidence – that QoS is a strong argument against net neutrality rules.

Analog Hole Bill Requires “Open and Public” Discussion of Secret Technology

Today I want to return to the Sensenbrenner-Conyers analog hole bill, which would impose a secret law – a requirement that all devices that accept analog video inputs must implement a secret technical specification for something called a VEIL detector. If you want to see this specification, you have to pay a $10,000 fee to a private company and you have to promise not to tell anyone about the technology. It’s pretty disturbing that our representatives would propose this kind of secret law.

But what is really odd about the secret technology is that the bill itself seems to assume that it is not secret. Consider, for example, Section 105:

If, upon the petition of any interested party, the Director of the Patent and Trademark Office determines that [VEIL] has become materially ineffective in a way that cannot be adequately remedied by existing technical flexibility in the embedding functions of [VEIL], then the Director may by rule adopt commercially reasonable improvements to the detection function of [VEIL] in order to maintain the functionality of the rights signaling system under this Act. Any such improvements shall be limited to adjustments or upgrades solely to the same underlying VEIL technology …

In [the above-described rulemaking], the Director … shall encourage representatives of the film industry, the broadcast, cable, and satellite industry, the information technology industry, and the consumer electronics industry to negotiate in good faith in an effort to reach agreement on the … improvements to [VEIL] to be adopted in the rule. The Director shall ensure that such negotiation process is open and public and that all potentially affected parties are invited to participate in the process through public notice. The Director shall cause any agreement for which there is substantial consensus of the parties on all material points to be published and shall take such agreement into account in any final rule adopted.

This process cannot be “open and public”, and an agreement on how the VEIL technology should be changed cannot be published, if the VEIL technology is secret. You can’t have a negotiation about how VEIL might be fixed, if the parties to that negotiation have promised not to disclose how VEIL works. And you can’t meaningfully invite members of the public to participate in the negotiation if they aren’t allowed to know about the subject being negotiated.

But that’s not all. The rulemaking will happen if somebody files a petition that convinces the Patent Office that VEIL “has become materially ineffective in a way that cannot be adequately remedied by existing technical flexibility in the embedding function” of VEIL.

The embedding function of VEIL is the gizmo that puts VEIL watermarks into video that is going to be distributed. It is separate from the detection function, which detects the presence or absence of a VEIL watermark in video content. The bill mandates that all analog video devices must include the detection function, so it is the detection function that one could learn about by paying the fee and taking the secrecy pledge.

But the embedding function of VEIL is entirely secret, and is not being revealed even to people who pay the fee and take the pledge. As far as I know, there is no way at all for anyone other than the VEIL company to find out how the embedding function works, or what kind of “existing technical flexibility” it might have. How anyone could petition the Patent Office on that subject is a mystery.

In short, the rulemaking procedure in Section 105 is entirely inconsistent with the secrecy of VEIL. How it got into the bill is therefore a pretty interesting question. Reading the bill, one gets the impression that it was assembled from prefab parts, rather than reflecting a self-consistent vision of how a technology mandate might actually work.

AOL, Yahoo Challenge Email Neutrality

AOL and Yahoo will soon start using Goodmail, a system that lets bulk email senders bypass the companies’ spam filters by paying the companies one-fourth of a cent per message, and promising not to send unsolicited messages, according to a New York Times story by Saul Hansell.

Pay-to-send systems are one standard response to spam. The idea is that raising the cost of sending a message will deter the kind of shot-in-the-dark spamming that sends a pitch to everybody in the hope that somebody, somewhere, will respond. The price should be high enough to deter spamming but low enough that legitimate email won’t be deterred. Or so the theory goes.
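
The theory rests on simple arithmetic. Here is a back-of-the-envelope illustration in Python; every number except the quarter-cent fee is made up for the example and is not from the Goodmail announcement.

    # Illustrative numbers only; nothing here comes from AOL, Yahoo, or
    # Goodmail except the quarter-cent fee.
    fee_per_message = 0.0025          # dollars: one-fourth of a cent

    # A hypothetical spam campaign: huge volume, tiny response rate.
    spam_volume = 10_000_000
    spam_response_rate = 0.00001      # one sale per 100,000 messages
    profit_per_sale = 20.0            # dollars

    spam_cost = spam_volume * fee_per_message                          # $25,000
    spam_revenue = spam_volume * spam_response_rate * profit_per_sale  # $2,000
    print("spam profit:", spam_revenue - spam_cost)                    # deeply negative

    # A hypothetical legitimate mailing to 50,000 customers who asked for it.
    legit_cost = 50_000 * fee_per_message
    print("cost of legitimate mailing:", legit_cost)                   # $125

With these assumed numbers the fee wipes out the spammer’s profit while costing the legitimate sender very little – which is exactly the balance the price is supposed to strike.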

What’s different here is that senders aren’t paying for delivery, but for an exemption from the email providers’ spam filters. As Eric Rescorla notes, this system creates interesting incentives for the providers. For instance, the providers will have an incentive to make their spam filters overly stringent – so that legitimate messages will be misclassified as spam, and senders will be more likely to pay for an exemption from the filters.

There’s an interesting similarity here to the network neutrality debate. Net neutrality advocates worry that residential ISPs will discriminate against some network traffic so that they can charge web sites and services a fee in exchange for not discriminating against their traffic. In the email case, the worry is that email providers will discriminate against commercial email, so that they can charge email senders a fee in exchange for not discriminating against their messages.

Is this really the same policy problem? If you advocate neutrality regulations on ISPs, does consistency require you to advocate neutrality regulations on email providers? Considering these questions may shed a little light on both issues.

My tentative reaction to the email case is that this may or may not be a smart move by AOL and Yahoo, but they ought to be free to try it. If customers get fewer of the commercial email messages they want (and don’t get enough reduction in spam to make up for it), they’ll be less happy with AOL and Yahoo, and some will take their business elsewhere. The key point, I think, is that customers have realistic alternatives they can switch to. Competition will protect them.

(You may object that switching email providers is costly for a customer who has been using an aol.com or yahoo.com email address – if he switches email providers, his old email address might not work any more. True enough, but a rational email provider will already be exploiting this lock-in, perhaps by charging the customer a slightly higher fee than he would pay elsewhere.)

Competition is a key issue – perhaps the most important one – in the net neutrality debate too. If commercial ISPs face real competition, so that users have realistic alternatives to an ISP who misbehaves, then ISPs will have to heed their customers’ demand for neutral access to sites and services. But if ISPs have monopoly power, their incentives may drive them to behave badly.

To me, the net neutrality issue hinges largely on whether the residential ISP market will be competitive. I can’t make a clear prediction, but I know that there are people who probably can. I’d love to hear what they have to say.

What does seem clear is that regulatory policy can help or hinder the emergence of competition. Enabling competition should be a primary goal of our future telecom regulation.

Report: Many Apps Misconfigure Security Settings

My fellow Princeton computer scientists Sudhakar Govindavajhala and Andrew Appel released an eye-opening report this week on access control problems in several popular applications.

In the old days, operating systems had simple access control mechanisms. In Unix, each file belonged to an owner and a (single) group of users. The owner had the option to give the other group members read and/or write permission, and the option to give everybody read and/or write permission. That was pretty much it.
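
The entire old-style model fits in a few bits per file. As a quick illustration, here is how those bits can be read with Python’s standard stat module; the 0o640 example mode is just an assumption for the sake of the example.

    import os
    import stat

    def describe_unix_permissions(path):
        # Decode the classic owner / group / other read-write bits for one file.
        mode = os.stat(path).st_mode
        return {
            "owner": {"read": bool(mode & stat.S_IRUSR), "write": bool(mode & stat.S_IWUSR)},
            "group": {"read": bool(mode & stat.S_IRGRP), "write": bool(mode & stat.S_IWGRP)},
            "other": {"read": bool(mode & stat.S_IROTH), "write": bool(mode & stat.S_IWOTH)},
        }

    # Example: a file with mode 0o640 is readable and writable by its owner,
    # readable by members of its group, and inaccessible to everyone else.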

Over time, things have gotten more complicated. Windows controls access to about fifteen types of objects, with about thirty different flavors of privileges that can each be granted or denied, for any object, to any user or group of users. Privileges can be managed with great precision. In theory, this lets people grant others the absolute minimum privileges they need to do their jobs, which is good security practice.

The downside of this complexity is that if the system is hard to understand, people will make mistakes. End users will surely make mistakes. But you might think that big software companies can manage this complexity and will get the security settings on their products right.

Which brings us to Sudhakar and Andrew’s research. They built an automated tool to analyze the access control settings on files, registry entries, and other objects on a Windows machine. The tool looks at the settings on the machine and applies a set of inference rules that encode the various ways a user could try to leverage his privileges improperly. For example, one rule says that if Alice has the privilege to modify a program, and Bob runs that program, then Alice can use any of Bob’s privileges. (She can do this by adding code to the program that does what she wants; when Bob runs the program, that code will run with Bob’s privileges.) The tool looks for privilege escalation attacks, or ways for a relatively unprivileged user to gain more privilege.
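
Their tool operates on real Windows access control settings, but the flavor of the analysis can be conveyed with a toy version: start from facts about who can modify or run which programs, and apply the modify-then-run rule until no new privileges appear. The facts and the single rule below are simplified, invented stand-ins for the much richer rule set in the actual paper.

    # Toy facts of the form (user, privilege, object). All invented for illustration.
    facts = {
        ("alice", "modify", "backup.exe"),
        ("bob",   "run",    "backup.exe"),
        ("bob",   "admin",  "machine"),
    }

    def escalate(facts):
        # Rule: if X can modify a program and Y runs that program, then X can
        # exercise any privilege Y holds (X plants code that runs as Y).
        # Apply the rule repeatedly until a fixed point is reached.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            new = set()
            for (x, priv, prog) in facts:
                if priv != "modify":
                    continue
                for (y, priv2, obj) in facts:
                    if priv2 == "run" and obj == prog:
                        # X inherits everything Y can do.
                        new |= {(x, p, o) for (z, p, o) in facts if z == y}
            if not new <= facts:
                facts |= new
                changed = True
        return facts

    # Alice, who could only modify backup.exe, ends up with Bob's admin privilege.
    print(("alice", "admin", "machine") in escalate(facts))   # True

The real tool encodes many such rules and scans the actual permissions on a machine, but the core idea is the same: it finds escalation paths by chaining individually innocuous-looking permissions.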

Sudhakar and Andrew ran the tool on professionally managed Windows systems, and the results were sobering. Several popular applications, from companies like Adobe, AOL, Macromedia, and Microsoft, had misconfigured their access control in ways that allowed relatively unprivileged users – in some cases even the lowliest Guest account – to gain full control of the system.

Sudhakar and Andrew notified the affected vendors well before publishing the paper, and some of the problems they found have been patched. But some problems remain, and testing on new systems tends to find still more problems.

There are two lessons here. First, complicated security mechanisms lead to mistakes, even among relatively sophisticated software developers and companies, so the desire to control privileges precisely must be tempered by the virtue of simplicity. Second, if you’re going to have a complicated system, you probably need tools to help you figure out whether you’re using it safely.