Archives for June 2006

Adobe Scares Microsoft with Antitrust Threat?

Microsoft has changed the next versions of Windows and Office after antitrust lawsuit threats from Adobe, according to Ina Fried’s article at news.com. Here’s a summary of Microsoft’s changes:

[Microsoft] is making two main changes. With Vista [the next version of Windows], it plans to give computer makers the option of dropping some support for XPS, Microsoft’s fixed-format document type that some have characterized as a PDF-killer. Under the changes, Microsoft will still use XPS under the hood to help the operating system print files. But computer makers won’t have to include the software that allows users to view XPS files or to save documents as XPS files.

[…]

On the Office side, Microsoft plans to take out of Office 2007 a feature that allows documents to be saved in either XPS or PDF formats. However, consumers will be able to go to Microsoft’s Web site and download a patch that will add those capabilities back in.

The obvious comparison here is to Microsoft’s tactics in the browser market, which were the basis of the big antitrust suit brought by the U.S. Department of Justice in 1998. (I worked closely with the DOJ on that case. I testified twice as DOJ’s main technical expert witness, and I provided other advice to the DOJ before, during, and after the trial. I’m still bound by a confidentiality agreement, so I’ll stick to public information here.)

Critics of the DOJ case often misstate DOJ’s arguments. DOJ did not object to Microsoft making its Internet Explorer (IE) browser available to customers who wanted it. With respect to the bundling of IE, DOJ objected to (a) contracts forbidding PC makers and end users from removing IE, (b) technical measures (not justifiable for engineering reasons) to block end users from removing IE, and (c) technical measures (not justifiable) designed to frustrate users of alternative browsers. DOJ argued that Microsoft took these steps to maintain its monopoly power in the market for PC operating systems. The courts largely accepted these arguments.

Microsoft has reportedly made two changes at Adobe’s behest. The first change, allowing PC makers to remove Vista’s XPS printing feature, may be consistent with the DOJ/IE analogy. By hardwiring IE into Windows, Microsoft raised the cost to PC makers of offering an alternative browser. Depending on how the XPS feature was provided, this may have been the case with XPS printing too. There is at least a plausible argument that allowing PC makers to unbundle XPS printing would enhance competition.

What may differ from the DOJ/IE situation is the relationship between the OS market and the other product (browser or portable document) market. With browsers, there was a pretty convincing argument that by suppressing alternative browsers, Microsoft was helping to entrench its OS monopoly power, because of the likelihood that a rival browser would evolve into a platform for application development, thereby reducing lock-in in the OS market. It’s not clear whether there is an analogous argument for portable document formats – and if there’s not an argument that tying XPS to Windows hurts consumers somewhere else, then perhaps it’s okay to let Microsoft bundle XPS printing with Vista.

The other action by Microsoft, distributing XPS/PDF printing functionality separately from Office (via download only), isn’t so close to the arguments in the DOJ case. Recall that DOJ was willing to let Microsoft ship IE with Windows if the customer wanted it that way, as long as there was a way for customers (including PC makers) to get rid of IE if they didn’t want it, and as long as Microsoft didn’t try to interfere with customers’ ability to use competing browsers. The analogy here would be if Microsoft allowed PC makers and customers to disable the XPS/PDF functionality in Office – for example, if they liked a competing print-to-PDF product better – and didn’t try to interfere with competing products.

The point of all this should not be to handcuff Microsoft, but to protect the ability of PC makers and end users to choose between Microsoft products and competing products.

Twenty-First Century Wiretapping: Content-Based Suspicion

Yesterday I argued that allowing police to record all communications that are flagged by some automated algorithm might be reasonable, if the algorithm is being used to recognize the voice of a person believed (for good reason) to be a criminal. My argument, in part, was that that kind of wiretapping would still be consistent with the principle of individualized suspicion, which says that we shouldn’t wiretap someone unless we have strong enough reason to suspect them, personally, of criminality.

Today, I want to argue that there are cases where even individualized suspicion isn’t necessary. I’ll do so by introducing yet another hypothetical.

Suppose we have reliable intelligence that al Qaeda operatives have been instructed to use a particular verbal handshake to identify each other. Operatives will prove they are members of al Qaeda by carrying out some predetermined dialog that is extremely unlikely to occur naturally. Like this, for instance:

First Speaker: The Pirates will win the World Series this year.
Second Speaker: Yes, and Da Vinci Code is the best movie ever made.

The police ask us for permission to run automated voice recognition algorithms on all phone conversations, and to record all conversations that contain this verbal handshake. Is it reasonable to give permission?

If the voice recognition is sufficiently accurate, this could be reasonable – even though the wiretapping is not based on advance suspicion of any particular individual. Suspicion is based not on the identity of the individuals speaking, but on the content of the communication. (You could try arguing that the content causes individualized suspicion, at the moment it is analyzed, but if you go that route the individualized suspicion principle doesn’t mean much anymore.)

Obviously we wouldn’t give the police carte blanche to use any kind of content-based suspicion whenever they wanted. What makes this hypothetical different is that the suspicion, though content-based, is narrowly aimed and is based on specific evidence. We have good reason to believe that we’ll be capturing some criminal conversations, and that we won’t be capturing many noncriminal ones. This, I think, is the general principle: intercepted communications may only be made known to a human based on narrowly defined triggers (whether individual-based or content-based), and those triggers must be justified based on specific evidence that they will be fruitful but not overbroad.
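To make the notion of a narrowly defined content-based trigger concrete, here is a minimal sketch in Python. It assumes a hypothetical transcribe function that turns call audio into text; the handshake phrases are the ones from the example above, and the matching is deliberately simplistic.

```python
import re

# Handshake phrases from the hypothetical above. A real system would have
# to tolerate speech-recognition errors and paraphrase; this sketch just
# checks that both phrases occur somewhere in the same conversation.
HANDSHAKE_PATTERNS = [
    re.compile(r"pirates will win the world series", re.IGNORECASE),
    re.compile(r"da vinci code is the best movie", re.IGNORECASE),
]

def trigger_fires(transcript: str) -> bool:
    """True only if every handshake phrase appears in the transcript."""
    return all(p.search(transcript) for p in HANDSHAKE_PATTERNS)

def screen_call(call_audio, transcribe):
    """Apply the content-based trigger to a single call.

    `transcribe` stands in for some speech-to-text component. Calls whose
    transcripts do not satisfy the trigger are discarded without any person
    ever hearing them; only triggered calls are escalated for human review.
    """
    transcript = transcribe(call_audio)
    return transcript if trigger_fires(transcript) else None
```

The point the sketch tries to capture is that the discretion lies in approving the trigger in advance, not in choosing, call by call, what a person gets to hear.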

You might argue that if the individualized suspicion principle has been good enough for the past [insert large number] years, it should be good enough for the future too. But I think this argument misses an important consequence of changing technology.

Back before the digital revolution, there were only two choices: give the police narrow warrants to search or wiretap specific individuals or lines, or give the police broad discretion to decide whom to search or wiretap. Broad discretion was problematic because the police might search too many people, or might search people for the wrong reasons. Content-based triggering, where a person got to overhear the conversation only if its content satisfied specific trigger rules, was not possible, because the only way to tell whether the trigger was satisfied was to have a person listen to the conversation. And there was no way to unlisten to that conversation if the trigger wasn’t present. Technology raises the possibility that automated algorithms can implement triggering rules, so that content-based triggers become possible – in theory at least.

Given that content-based triggering was infeasible in the past, the fact that traditional rules don’t make provision for it does not, in itself, end the argument. This is the kind of situation that needs to be evaluated anew, with proper respect for traditional principles, but also with an open mind about how those principles might apply to our changed circumstances.

By now I’ve convinced you, I hope, that there is a plausible argument in favor of allowing government to wiretap based on content-based triggers. There are also plausible arguments against. The strongest ones, I think, are (1) that content-based triggers are inconsistent with the current legal framework, (2) that content-based triggers will necessarily make too many false-positive errors and thereby capture too many innocent conversations, and (3) that the infrastructure required to implement content-based triggers creates too great a risk of abuse. I’ll wrap up this series with three more posts, discussing each of these arguments in turn.

Twenty-First Century Wiretapping: Recognition

For the past several weeks I’ve been writing, on and off, about how technology enables new types of wiretapping, and how public policy should cope with those changes. Having laid the groundwork (1; 2; 3; 4; 5) we’re now ready to bite into the most interesting question. Suppose the government is running, on every communication, some algorithm that classifies messages as suspicious or not, and that every conversation labeled suspicious is played for a government agent. When, if ever, is the government justified in using such a scheme?

Many readers will say the answer is obviously “never”. Today I want to argue that that is wrong – that there are situations where automated flagging of messages for human analysis can be justified.

A standard objection to this kind of algorithmic triggering is that authority to search or wiretap must be based on individualized suspicion, that is, that there must be sufficient cause to believe that a specific individual is involved in illegal activity, before that individual can be wiretapped. To the extent that that is an assertion about current U.S. law, it doesn’t answer my question – recall that I’m writing here about what the legal rules should be, not what they are. Any requirement of individualized suspicion must be justified on the merits. I understand the argument for it on the merits. All I’m saying is that that argument doesn’t win by default.

One reason it shouldn’t win by default is that individualized suspicion is sometimes consistent with algorithmic recognition. Suppose that we have strong cause to believe that Mr. A is planning to commit a terrorist attack or some other serious crime. This would justify tapping Mr. A’s phone. And suppose we know Mr. A is visiting Chicago but we don’t know exactly where in the city he is, and we expect him to make calls on random hotel phones, pay phones, and throwaway cell phones. Suppose further that the police have good audio recordings of Mr. A’s voice.

The police propose to run automated voice recognition software on all phone calls in the Chicago area. When the software flags a recording as containing Mr. A’s voice, that recording will be played for a police analyst, and if the analyst confirms the voice as Mr. A’s, the call will be recorded. The police ask us, as arbiters of the public good, for clearance to do this.
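As a rough sketch of what is being proposed, assuming a hypothetical speaker-verification function that scores how closely a call matches a recording of Mr. A’s voice (the function names and the threshold below are illustrative, not any real system’s API):

```python
ESCALATION_THRESHOLD = 0.9  # illustrative; fixed in advance as part of the authorization

def screen_call(call_audio, suspect_voiceprint, match_score, analyst_confirms):
    """Voice-recognition trigger for one phone call.

    `match_score` stands in for a speaker-verification model returning a
    similarity score in [0, 1]; `analyst_confirms` stands in for the human
    step. Calls scoring below the threshold are never heard by anyone.
    """
    score = match_score(call_audio, suspect_voiceprint)
    if score < ESCALATION_THRESHOLD:
        return None                      # discarded unheard
    if analyst_confirms(call_audio):     # analyst verifies the voice is Mr. A's
        return call_audio                # only now is the call retained
    return None                          # flagged but not confirmed; dropped
```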

If we knew that the voice recognition algorithm would be 100% accurate, then it would be hard to object to this. Using an automated algorithm would be more consistent with the principle of individualized suspicion than would be the traditional approach of tapping Mr. A’s home phone. His home phone, after all, might be used by an innocent family member or roommate, or by a plumber working in his house.

But of course voice recognition is not 100% accurate. It will miss some of Mr. A’s calls, and it will incorrectly flag some calls by others. How serious a problem is this? It depends on how many errors the algorithm makes. The traditional approach sometimes records innocent people – others might use Mr. A’s phone, or Mr. A might turn out to be innocent after all – and these errors make us cautious about wiretapping but don’t preclude wiretapping if our suspicion of Mr. A is strong enough. The same principle ought to hold for automated voice recognition. We should be willing to accept some modest number of errors, but if errors are more frequent we ought to require a very strong argument that recording Mr. A’s phone calls is of critical importance.
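A back-of-the-envelope calculation, with entirely made-up numbers, shows why the algorithm’s error rate has to be weighed against how rare Mr. A’s calls are among all the calls being screened:

```python
# All numbers are invented for illustration only.
calls_per_day       = 5_000_000   # total calls screened in the Chicago area
suspect_calls       = 10          # calls actually made by Mr. A each day
detection_rate      = 0.95        # chance the algorithm flags one of his calls
false_positive_rate = 0.0001      # chance it flags any other call

true_flags  = suspect_calls * detection_rate                          # about 9.5
false_flags = (calls_per_day - suspect_calls) * false_positive_rate   # about 500

print(f"calls flagged per day: about {true_flags + false_flags:.0f}")
print(f"share that are really Mr. A: {true_flags / (true_flags + false_flags):.1%}")
```

With different assumptions the picture changes completely, which is why the algorithm’s accuracy has to be part of the decision to authorize it at all.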

In practice, we would want to set out crisply defined criteria for making these determinations, but we don’t need to do that exercise here. It’s enough to observe that given sufficiently accurate voice recognition technology – which might exist some day – algorithmically triggered recording can be (a) justified, and (b) consistent with the principle of individualized suspicion.

But can algorithmic triggering be justified, even if not based on individualized suspicion? I’ll argue next time that it can.