Archives for January 2009

District Court Ruling in MDY v. Blizzard

Today, an Arizona District Court issued its ruling in the MDY v. Blizzard case, which involves contract, copyright, and DMCA claims. The claims addressed at trial were fairly limited because the Court entered summary judgment on several claims last summer. In-court comments by lawyers suggest that the case is headed toward appeal in the Ninth Circuit. Since I served as an expert witness in the case, I’ll withhold comment in this forum at this time, but readers are free to discuss it.

The Supreme Court and Software Patents

I’m very excited that Doug Lichtman, a sharp law professor at UCLA, has decided to take up podcasting. His podcast, Intellectual Property Colloquium, features monthly, in-depth discussions of copyright and patent law. The first installment (mp3) featured a lively discussion between Lichtman and EFF’s inimitable Fred von Lohmann about the Cablevision decision and its implications for copyright law. November’s episode focused on In Re Bilski, the widely-discussed decision by the United States Court of Appeals for the Federal Circuit limiting patents on abstract concepts like software and business methods. The podcast featured two law professors, John Duffy (who argued the Bilski case before the Federal Circuit) and Rob Merges.

As I noted at the time it was decided, people care about Bilski largely because of what it says about the legality of software patents. Software patents are intensely controversial, with many geeks arguing that the software industry would be better off without them. What I found striking about the conversation was that both guests (and perhaps the host, although he didn’t tip his hand as much) took it as self-evident that there needed to be patents on software and business methods. As one of the guests (I couldn’t tell whether it was Merges or Duffy, but they seemed largely to agree) said around minute 47:

The easiest criticism of the [Bilski] opinion is that it invites this kind of somewhat pointless metaphysical investigation. What you say is “look, I’ve got an invention, I wrote some code, I’d like a patent for that.” Why do we have to play this kind of sophomoric philosophical game of “well, what changes in the real world when my code runs?” The [Supreme Court] case law arose fairly early in the information technology revolution. We’re kind of stuck with this artifactual, residual overhang of physicality. It’s just the price we have to pay to get a software patent these days. Someday maybe it will drop away or wither away, but that’s where we find ourselves now.

On this view, the Supreme Court’s historical hostility toward patents on software is merely an historical accident—a “residual overhang” that we’d do well to get beyond. Guided by a strong policy preference for the patentability of software and business methods, Duffy and Merges seem to feel that the Federal Circuit should give little weight to Supreme Court decisions that they regard as out of touch with the modern realities of the software industry. After all, this is “just the price we have to pay to get a software patent these days.”

I don’t agree with this perspective. I’ve long sympathized with software patent critics such as Ben Klemens who argue that the Supreme Court’s precedents place clear limits on the patenting of software. But I thought it would be interesting to take a closer look at the Supreme Court’s classic decisions and talk to some patent scholars to see if I can understand why there are such divergent opinions about the Supreme Court’s jurisprudence. The result is a new feature article for Ars Technica, where I review the Supreme Court’s classic trilogy of software patent cases and ponder how those cases should be applied to the modern world.

Like most Supreme Court decisions, these three opinions are not the clearest in the world. The justices, like most of the legal profession, seem slightly confused about the relationships among mathematical algorithms, software, and computer programs. It’s certainly possible to find phrases in these cases that support either side of the software patent debate. However, a clear theme emerges from all three cases: mathematics is ineligible for patent protection, and software algorithms are mathematics. The high court struggled with what to do in cases where software is one part of an otherwise-patentable machine. But it’s hard to avoid the conclusion that many of the “pure” software patents that have generated so much controversy in recent years cannot be reconciled with the Supreme Court’s precedents. For example, it’s hard to read those precedents in a way that would allow Amazon’s famous “one-click” patent.

I also argue that this result is a good one from a public policy perspective. Software has several important properties that make it fundamentally different from the other categories of now-patentable subject matter. First, as Klemens points out, almost every significant firm has an IT department that creates software, which means that every significant firm is a potential target for software patent lawsuits. This is a very different situation from, say, pharmaceutical patents, which affect only a tiny fraction of the American economy. Second, software is already eligible for copyright protection, rendering software patents largely redundant. Most important, we now have fifteen years of practical experience with software patents, and the empirical results have not been encouraging. I don’t think it’s a coincidence that the explosion of patent litigation over the last fifteen years has been concentrated in the software industry.

As the Federal Circuit struggles to craft new rules for software patent eligibility, it should take a close look at the far more restrictive eligibility rules that were applied in the 1970s and early 1980s.

The (Ironic) Best Way to Make the Bailout Transparent

The next piece of proposed bailout legislation is called the American Recovery and Reinvestment Act of 2009. Chris Soghoian, who is covering the issue on his Surveillance State blog at CNET, brought the bill to my attention, particularly a provision requiring that a new web site called Recovery.gov “provide data on relevant economic, financial, grant, and contract information in user-friendly visual presentations to enhance public awareness of the use of funds made available in this Act.” As a group of colleagues and I suggested last year in Government Data and the Invisible Hand, there’s an easy way to make rules like this one a great deal more effective.

Ultimately, we all want information about bailout spending to be available in the most user-friendly way to the broadest range of citizens. But is a government monopoly on “presentations” of the data the best way to achieve that goal? Probably not. If Congress orders the federal bureaucracy to provide a web site for end users, then we will all have to live with the one web site they cook up. Regular citizens would have more and better options for learning about the bailout if Congress instead told the executive branch to provide the relevant data in a structured, machine-readable format such as XML, so that many different sites could be built to analyze and present it. (A government site aimed at end users would also be fine. But we’re only apt to get machine-readable data if Congress makes it a requirement.)

Why does this matter? Because without the underlying data, anyone who wants to provide a useful new tool for analysis must first try to reconstruct the underlying numbers from the “user-friendly visual presentations” or “printable reports” that the government publishes. Imagine trying to convert a nice-looking graph back into a list of figures, or trying to turn a printed transcript of a congressional debate into a searchable database of who said what and when. It’s not easy.
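To make the contrast concrete, here is a rough sketch of the kind of tool a volunteer could throw together if the raw data were published as structured XML. The feed URL and element names below are invented purely for illustration, not anything Recovery.gov actually offers; the point is simply that consuming structured data takes a few lines of code, while reverse-engineering a pretty graph does not.

    # A minimal sketch, assuming a hypothetical Recovery.gov XML feed.
    # The URL and element names are invented for illustration only.
    import urllib.request
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    FEED_URL = "https://www.recovery.gov/data/awards.xml"  # hypothetical feed

    def spending_by_state(xml_bytes):
        """Total award dollars per state from a structured XML feed."""
        totals = defaultdict(float)
        root = ET.fromstring(xml_bytes)
        for award in root.iter("award"):                # hypothetical element name
            state = award.findtext("state", "unknown")
            amount = float(award.findtext("amount", "0"))
            totals[state] += amount
        return dict(totals)

    if __name__ == "__main__":
        with urllib.request.urlopen(FEED_URL) as response:
            print(spending_by_state(response.read()))

The hard part isn’t writing code like this; it’s getting the numbers in a machine-readable form in the first place.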

Once the computer-readable data is out there—whether straightforwardly published by the government officials who have it in the first place, or painstakingly recreated by volunteers who don’t—we know that a small army of volunteers and nonprofits stands ready to create tools that regular citizens, even those with no technical background at all, will find useful. This group of volunteers is itself a small constituency, but the things they make, like Govtrack, Open Congress, and Washington Watch, are used by a much broader population of interested citizens. The federal government might decide to put together a system for making maps or graphs. But what about an interactive one like this? What about three-dimensional animated visualizations over time? What about an interface that’s specially designed for blind users, who still want to organize and analyze the data but may be unable to benefit as most of us can from visualizations? There might be an interface in Spanish, the second most common American language, but what about one in Tagalog, the sixth most common?

There’s a deep and important irony here: The best way for government data to reach the broadest possible population is probably to release it in a form that nobody wants to read. XML files are called “machine-readable” because they make sense to a computer, rather than to human eyes. Releasing the data that way—so a variety of “user-friendly presentations,” to match the variety of possible users, can emerge—is what will give regular citizens the greatest power to understand and react to the bailout. It would be a travesty to make government the only source for interaction with bailout data—the transparency equivalent of central planning. It would be better for everyone, and easier, to let a thousand mashups bloom.

CA SoS Bowen sends proposals to EAC

California Secretary of State Debra Bowen has sent a letter to Chair Gineen Beach of the US Election Assistance Commission (EAC) outlining three proposals that she thinks will markedly improve the integrity of voting systems in the country.

I’ve put a copy of Bowen’s letter here (87kB PDF).

Bowen’s three proposals are:

  • Vulnerability Reporting — The EAC should require that vendors disclose vulnerabilities, flaws, problems, etc. to the EAC as the system certification authority and to all of the state election directors whose states use the affected equipment.
  • Uniform Incident Reporting — The EAC should create and adopt procedures that jurisdictions can follow to collect and report data about incidents they experience with their voting systems.
  • Voting System Performance Measurement — As part of the Election Day Survey, the EAC should systematically collect data from election officials about how voting systems perform during general elections.

In my opinion, each of these would be a welcome move for the EAC.

These proposals would put into place a number of essential missing elements in the administration of computerized election equipment. First, for election officials, the users of these systems, it can be extremely frustrating and debilitating to suspect that some voting system flaw is responsible for the problems they’re experiencing. Often, when errors arise, contingency planning requires detailed knowledge of the specific voting system flaw. Without knowing as much as possible about the problem they’re facing, election officials can exacerbate it. At best, not knowing about a potential flaw can do what Bowen describes: doom the election official, and others with the same equipment, to encountering the flaw again and again in subsequent elections. Of course, vendors are the most likely to have useful information about a given flaw, and they should be required to report this information to both the EAC and election officials.

Often the most information we have about voting system incidents comes from reports by local journalists. These reporters don’t tend to cover high technology very often, and their reports are frequently incomplete and in many cases simply and obviously incorrect. Having a standardized set of elements that an election official can collect and report about voting system incidents will help to ensure that the data comes directly from those experiencing a given problem. The EAC should design such procedures, along with a system for collecting these reports and sharing them with other election officials and the public.
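Bowen’s letter doesn’t spell out what a uniform incident report would contain, but purely as an illustration, a standardized record might capture something like the fields below. Every field name here is hypothetical; the real set of elements would be for the EAC and election officials to define.

    # Purely illustrative sketch of a uniform voting-system incident record.
    # The EAC has not defined such a format; every field below is hypothetical.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class IncidentReport:
        jurisdiction: str            # e.g., "Example County, CA"
        observed_at: datetime        # when the incident was observed
        voting_system: str           # vendor and model of the affected equipment
        firmware_version: str        # exact software/firmware version in use
        description: str             # what the election official observed
        ballots_affected: int = 0    # best estimate of ballots affected
        workaround: str = ""         # any contingency measure that was applied
        reported_to_vendor: bool = False

Even a simple, consistent structure like this would let the EAC aggregate reports across jurisdictions and spot patterns that isolated press accounts would miss.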

Finally, many of us were disappointed to learn that the 2008 Election Day survey would not include questions about voting system performance. Election Day is a unique and hard-to-replicate event, and very little systematic data is collected about how voting machines perform during it. The OurVoteLive and MyVote1 efforts go a long way toward providing actionable, qualitative data that can help to increase enfranchisement. However, self-reported data from the operators of the machinery of our democracy would be a gold mine for identifying and examining trends, both good and bad, in how this machinery performs.

I know a number of people, including Susannah Goodman at Common Cause as well as John Gideon and Ellen Theisen of VotersUnite!, who have been championing one or another of these proposals in their advocacy. The fact that Debra Bowen has penned this letter is a testament to the soundness of the reasoning behind their efforts.

DRM In Retreat

Last week’s agreement between Apple and the major record companies to eliminate DRM (copy protection) in iTunes songs marks the effective end of DRM for recorded music. The major online music stores are now all DRM-free, and CDs still lack DRM, so consumers who acquire music will now expect it without DRM. That’s a sensible result, given the incompatibility and other problems caused by DRM, and it’s a good sign that the record companies are ready to retreat from DRM and get on with the job of reinventing themselves for the digital world.

In the movie world, DRM for stored content may also be in trouble. On DVDs, the CSS DRM scheme has long been a dead letter, technologically speaking. The Blu-ray scheme is better, but if Blu-ray doesn’t catch on, that won’t matter.

Interestingly, DRM is not retreating as quickly in systems that stream content on demand. This makes sense because the drawbacks of DRM are less salient in a streaming context: there is no need to maintain compatibility with old content; users can be assumed to be online so software can be updated whenever necessary; and users worry less about preserving access when they know they can stream the content again later. I’m not saying that DRM causes no problems with streaming, but I do think the problems are less serious than in a stored-content setting.

In some cases, streaming uses good old-fashioned incompatibility in place of DRM. For example, a stream might use a proprietary format, and the most convenient software for watching streams might lack a “save this video” button.

It remains to be seen how far DRM will retreat. Will it wither away entirely, or will it hang on in some applications?

Meanwhile, it’s interesting to see traditional DRM supporters back away from it. RIAA chief Mitch Bainwol now says that the RIAA is agnostic on DRM. And DRM cheerleader Bill Rosenblatt has relaunched his “DRM Watch” blog under the new title “Copyright and Technology”. The new blog’s first entry: iTunes going DRM-free.