Today, an Arizona District Court issued its ruling in the MDY v. Blizzard case, which involves contract, copyright, and DMCA claims. The claims addressed at trial were fairly limited because the Court entered summary judgment on several claims last summer. In-court comments by lawyers suggest that the case is headed toward appeal in the Ninth Circuit. Since I served as an expert witness in the case, I’ll withhold comment in this forum at this time, but readers are free to discuss it.
I’m very excited that Doug Lichtman, a sharp law professor at UCLA, has decided to take up podcasting. His podcast, Intellectual Property Colloquium, features monthly, in-depth discussions of copyright and patent law. The first installment (mp3) featured a lively discussion between Lichtman and EFF’s inimitable Fred von Lohmann about the Cablevision decision and its implications for copyright law. November’s episode focused on In re Bilski, the widely discussed decision by the United States Court of Appeals for the Federal Circuit limiting patents on abstract concepts like software and business methods. The podcast featured two law professors, John Duffy (who argued the Bilski case before the Federal Circuit) and Rob Merges.
As I noted at the time it was decided, people care about Bilski largely because of what it says about the legality of software patents. Software patents are intensely controversial, with many geeks arguing that the software industry would be better off without them. What I found striking about the conversation was that both guests (and perhaps the host, although he didn’t tip his hand as much) took it as self-evident that there needed to be patents on software and business methods. As one of the guests (I couldn’t tell if it was Merges or Duffy, but they seemed to largely agree) said around minute 47:
The easiest criticism of the [Bilski] opinion is that it invites this kind of somewhat pointless metaphysical investigation. What you say is “look, I’ve got an invention, I wrote some code, I’d like a patent for that.” Why do we have to play this kind of sophomoric philosophical game of “well, what changes in the real world when my code runs?” The [Supreme Court] case law arose fairly early in the information technology revolution. We’re kind of stuck with this artifactual, residual overhang of physicality. It’s just the price we have to pay to get a software patent these days. Someday maybe it will drop away or wither away, but that’s where we find ourselves now.
On this view, the Supreme Court’s historical hostility toward patents on software is merely an historical accident—a “residual overhang” that we’d do well to get beyond. Guided by a strong policy preference for the patentability of software and business methods, Duffy and Merges seem to feel that the Federal Circuit should give little weight to Supreme Court decisions that they regard as out of touch with the modern realities of the software industry. After all, this is “just the price we have to pay to get a software patent these days.”
I don’t agree with this perspective. I’ve long sympathized with software patent critics such as Ben Klemens who argue that the Supreme Court’s precedents place clear limits on the patenting of software. But I thought it would be interesting to take a closer look at the Supreme Court’s classic decisions and talk to some patent scholars to see if I can understand why there are such divergent opinions about the Supreme Court’s jurisprudence. The result is a new feature article for Ars Technica, where I review the Supreme Court’s classic trilogy of software patent cases and ponder how those cases should be applied to the modern world.
Like most Supreme Court decisions, these three opinions are not the clearest in the world. The justices, like most of the legal profession, seem slightly confused about the relationships among mathematical algorithms, software, and computer programs. It’s certainly possible to find phrases in these cases that support either side of the software patent debate. However, a clear theme emerges from all three cases: mathematics is ineligible for patent protection, and software algorithms are mathematics. The high court struggled with what to do in cases where software is one part of an otherwise-patentable machine. But it’s hard to avoid the conclusion that many of the “pure” software patents that have generated so much controversy in recent years cannot be reconciled with the Supreme Court’s precedents. For example, it’s hard to read those precedents in a way that would allow Amazon’s famous “one-click” patent.
I also argue that this result is a good one from a public policy perspective. Software has several important properties that make it fundamentally different from the other categories of now-patentable subject matter. First, as Klemens points out, almost every significant firm has an IT department that creates software, which means that every significant firm is a potential target for software patent lawsuits. This is a very different situation from, say, pharmaceutical patents, which affect only a tiny fraction of the American economy. Second, software is already eligible for copyright protection, rendering software patents largely redundant. Most important, we now have 15 years of practical experience with software patents, and the empirical results have not been encouraging. I don’t think it’s a coincidence that the explosion of patent litigation over the last fifteen years has been concentrated in the software industry.
As the Federal Circuit struggles to craft new rules for the patent eligibility of software, it should take a close look at the far more restrictive rules that were applied in the 1970s and early 1980s.
The next piece of proposed bailout legislation is called the American Recovery and Reinvestment Act of 2009. Chris Soghoian, who is covering the issue on his Surveillance State blog at CNET, brought the bill to my attention, particularly a provision requiring that a new web site called Recovery.gov “provide data on relevant economic, financial, grant, and contract information in user-friendly visual presentations to enhance public awareness of the use of funds made available in this Act.” As a group of colleagues and I suggested last year in Government Data and the Invisible Hand, there’s an easy way to make rules like this one a great deal more effective.
Ultimately, we all want information about bailout spending to be available in the most user-friendly way to the broadest range of citizens. But is a government monopoly on “presentations” of the data the best way to achieve that goal? Probably not. If Congress orders the federal bureaucracy to provide a web site for end users, then we will all have to live with the one web site it cooks up. Regular citizens would have more and better options for learning about the bailout if Congress told the executive branch to provide the relevant data in a structured, machine-readable format such as XML, so that many different sites could be built to analyze the data. (A government site aimed at end users would also be fine. But we’re only apt to get machine-readable data if Congress makes it a requirement.)
Why does this matter? Because without the underlying data, anyone who wants to provide a useful new tool for analysis must first try to reconstruct the underlying numbers from the “user-friendly visual presentations” or “printable reports” that the government publishes. Imagine trying to convert a nice-looking graph back into a list of figures, or trying to turn a printed transcript of a congressional debate into a searchable database of who said what and when. It’s not easy.
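To make the contrast concrete, here is a minimal sketch of what structured spending data might look like and how easily a third-party tool could reuse it. The element names and figures below are invented for illustration; they are not from any actual Recovery.gov schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical machine-readable grant feed. Everything here -- element
# names, recipients, amounts -- is made up for illustration only.
xml_data = """
<grants>
  <grant id="G-001">
    <recipient>Example Transit Authority</recipient>
    <state>NJ</state>
    <amount currency="USD">2500000</amount>
    <purpose>Bridge repair</purpose>
  </grant>
  <grant id="G-002">
    <recipient>Example School District</recipient>
    <state>CA</state>
    <amount currency="USD">1200000</amount>
    <purpose>Facility upgrades</purpose>
  </grant>
</grants>
"""

root = ET.fromstring(xml_data)

# With structured data in hand, any site can re-aggregate it however its
# users need -- here, total spending by state. No graph-scraping required.
totals = {}
for grant in root.findall("grant"):
    state = grant.findtext("state")
    amount = int(grant.findtext("amount"))
    totals[state] = totals.get(state, 0) + amount

print(totals)  # {'NJ': 2500000, 'CA': 1200000}
```

A dozen lines suffice because the data arrives as data. Reversing the process, recovering these numbers from a published bar chart or a printable PDF report, is exactly the painstaking reconstruction described above.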
Once the computer-readable data is out there—whether straightforwardly published by the government officials who have it in the first place, or painstakingly recreated by volunteers who don’t—we know that a small army of volunteers and nonprofits stands ready to create tools that regular citizens, even those with no technical background at all, will find useful. This group of volunteers is itself a small constituency, but the things they make, like Govtrack, Open Congress, and Washington Watch, are used by a much broader population of interested citizens. The federal government might decide to put together a system for making maps or graphs. But what about an interactive one like this? What about three-dimensional animated visualizations over time? What about an interface that’s specially designed for blind users, who still want to organize and analyze the data but may be unable to benefit as most of us can from visualizations? There might be an interface in Spanish, the second most common American language, but what about one in Tagalog, the sixth most common?
There’s a deep and important irony here: The best way for government data to reach the broadest possible population is probably to release it in a form that nobody wants to read. XML files are called “machine-readable” because they make sense to a computer, rather than to human eyes. Releasing the data that way—so a variety of “user-friendly presentations,” to match the variety of possible users, can emerge—is what will give regular citizens the greatest power to understand and react to the bailout. It would be a travesty to make government the only source for interaction with bailout data—the transparency equivalent of central planning. It would be better for everyone, and easier, to let a thousand mashups bloom.
California Secretary of State Debra Bowen has sent a letter to Chair Gineen Beach of the US Election Assistance Commission (EAC) outlining three proposals that she thinks will markedly improve the integrity of voting systems in the country.
I’ve put a copy of Bowen’s letter here (87kB PDF).
Bowen’s three proposals are:
- Vulnerability Reporting — The EAC should require that vendors disclose vulnerabilities, flaws, problems, etc. to the EAC as the system certification authority and to all the state election directors that use the affected equipment.
- Uniform Incident Reporting — The EAC should create and adopt procedures that jurisdictions can follow to collect and report data about incidents they experience with their voting systems.
- Voting System Performance Measurement — As part of the Election Day Survey, the EAC should systematically collect data from election officials about how voting systems perform during general elections.
In my opinion, each of these would be a welcome move for the EAC.
These proposals would put into place a number of essential missing elements of administering computerized elections equipment. First, for the users of these systems, election officials, it can be extremely frustrating and debilitating if they suspect that some voting system flaw is responsible for problems they’re experiencing. Often, when errors arise, contingency planning requires detailed knowledge about specific details of a voting system flaw. Without knowing as much as possible about the problem they’re facing, election officials can exacerbate the problem. At best, not knowing about a potential flaw can do what Bowen describes: doom the election official, and others with the same equipment, to repeatedly encounter the flaw in subsequent elections. Of course, vendors are the most likely to have useful information on a given flaw, and they should be required to report this information to both the EAC and election officials.
Often the most information we have about voting system incidents comes from reports by local journalists. These reporters don’t often cover high technology; their reports are frequently incomplete and in many cases simply and obviously incorrect. Having a standardized set of elements that an election official can collect and report about voting system incidents will help ensure that the data comes directly from those experiencing a given problem. The EAC should design such procedures, and then a system for collecting and reporting these issues to other election officials and the public.
Finally, many of us were disappointed to learn that the 2008 Election Day survey would not include questions about voting system performance. Election Day is a unique and hard-to-replicate event, and very little systematic data is collected about voting machine performance. The OurVoteLive and MyVote1 efforts go a long way towards collecting actionable, qualitative data that can help to increase enfranchisement. However, self-reported data from the operators of the machinery of our democracy would be a gold mine in terms of identifying and examining trends in how this machinery performs, both good and bad.
I know a number of people, including Susannah Goodman at Common Cause as well as John Gideon and Ellen Theisen of VotersUnite!, who have been championing one or another of these proposals in their advocacy. The fact that Debra Bowen has penned this letter is a testament to the merit of their efforts.
Last week’s agreement between Apple and the major record companies to eliminate DRM (copy protection) in iTunes songs marks the effective end of DRM for recorded music. The major online music stores are now all DRM-free, and CDs still lack DRM, so consumers who acquire music will now expect it without DRM. That’s a sensible result, given the incompatibility and other problems caused by DRM, and it’s a good sign that the record companies are ready to retreat from DRM and get on with the job of reinventing themselves for the digital world.
In the movie world, DRM for stored content may also be in trouble. On DVDs, the CSS DRM scheme has long been a dead letter, technologically speaking. The Blu-ray scheme is better, but if Blu-ray doesn’t catch on, this doesn’t matter.
Interestingly, DRM is not retreating as quickly in systems that stream content on demand. This makes sense because the drawbacks of DRM are less salient in a streaming context: there is no need to maintain compatibility with old content; users can be assumed to be online so software can be updated whenever necessary; and users worry less about preserving access when they know they can stream the content again later. I’m not saying that DRM causes no problems with streaming, but I do think the problems are less serious than in a stored-content setting.
In some cases, streaming uses good old-fashioned incompatibility in place of DRM. For example, a stream might use a proprietary format, and the most convenient software for watching streams might lack a “save this video” button.
It remains to be seen how far DRM will retreat. Will it wither away entirely, or will it hang on in some applications?
Meanwhile, it’s interesting to see traditional DRM supporters back away from it. RIAA chief Mitch Bainwol now says that the RIAA is agnostic on DRM. And DRM cheerleader Bill Rosenblatt has relaunched his “DRM Watch” blog under the new title “Copyright and Technology”. The new blog’s first entry: iTunes going DRM-free.
The recount of the 2008 Minnesota Senate race gives us an opportunity to evaluate the accuracy of precinct-count optical-scan voting. Though there have been contentious disputes over which absentee ballot envelopes to open, the core technology for scanning ballots has proved to be extremely accurate.
The votes were counted by machine (except for part of one county that counts votes by hand), then every single ballot was examined by hand in the recount.
The “net” accuracy of optical-scan voting was 99.99% (see below).
The “gross” accuracy was 99.91% (see below).
The rate of ambiguous ballots was low, 99.99% unambiguous (see below).
My analysis is based on the official spreadsheet from the Minnesota Secretary of State. I commend the Secretary of State for his commitment to transparency in the form of making the data available in such an easy-to-analyze format. The vast majority of the counties use the ES&S M100 precinct-count optical-scanners; a few use other in-precinct scanners.
I exclude from this analysis all disputes over which absentee ballots to open. Approximately 10% of the ballots included in this analysis are optically scanned absentee ballots that were not subject to dispute over eligibility.
There were 2,423,851 votes counted for Coleman and Franken. The “net” error rate is the net change in the vote margin from the machine scan to the hand recount (not including changes related to qualification of absentee ballot envelopes). This net change was 264 votes, for an accuracy of 99.99% (an error of about one part in ten thousand).
The “gross” error rate is the total number of individual ballots either added to one candidate, or subtracted from one candidate, by the recount. A ballot that was changed from one candidate to the other will count twice, but such ballots are rare. In the precinct-by-precinct data, the vast majority of precincts have no change; many precincts have exactly one vote added to one candidate; few precincts have votes subtracted, or more than one vote added, or both.
The recount added a total of 1,528 votes to the candidates, and subtracted a total of 642 votes, for a gross change of 2,170 (again, not including absentee ballot qualification). Thus, the “gross” error rate is about 1 in 1,000, or a gross accuracy of 99.91%.
Ambiguous ballots: During the recount, the Coleman and Franken campaigns initially challenged a total of 6,655 ballot-interpretation decisions made by the human recounters. The State Canvassing Board asked the campaigns to voluntarily withdraw all but their most serious challenges, and in the end approximately 1,325 challenges remained. That is, approximately 5 ballots in 10,000 were ambiguous enough that one side or the other felt like arguing about it. The State Canvassing Board, in the end, classified all but 248 of these ballots as votes for one candidate or another. That is, approximately 1 ballot in 10,000 was ambiguous enough that the bipartisan recount board could not determine an intent to vote. (This analysis is based on the assumption that if the voter made an ambiguous mark, then this ballot was likely to be challenged either by one campaign or the other.)
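The rates quoted above follow directly from the totals; here is a short script that redoes the arithmetic (all figures are taken from this post, and the script merely recomputes the percentages):

```python
total_votes = 2_423_851  # votes counted for Coleman and Franken

# Net error: net change in the Coleman-Franken margin from machine scan
# to hand recount (excluding absentee-envelope qualification disputes).
net_change = 264
net_accuracy = 100 * (1 - net_change / total_votes)
print(f"net accuracy:   {net_accuracy:.2f}%")    # 99.99%

# Gross error: every vote added to a candidate plus every vote subtracted.
added, subtracted = 1528, 642
gross_change = added + subtracted                # 2,170
gross_accuracy = 100 * (1 - gross_change / total_votes)
print(f"gross accuracy: {gross_accuracy:.2f}%")  # 99.91%

# Ambiguity: serious challenges that survived the voluntary withdrawals,
# and the subset the Canvassing Board could not resolve to either candidate.
serious_challenges = 1325
unresolved = 248
print(f"challenged:  {serious_challenges / total_votes:.6f}")  # ~5 in 10,000
print(f"unresolved:  {unresolved / total_votes:.6f}")          # ~1 in 10,000
```

Note that the net rate understates the raw disagreement between machine and hand counts, because additions to one candidate and additions to the other largely cancel; the gross rate is the better measure of per-ballot scanner accuracy.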
Caveat: As with all voting systems, including optical-scan, DREs, and plain old paper ballots, there is also a source of error from voters incorrectly translating their intent into the marked ballot. Such error is likely to be greater than 0.1%, but the analysis I have done here does not measure this error.
Hand counting: Saint Louis County, which uses a mix of optical-scan and hand-counting, had a higher error rate: net accuracy 99.95%, gross accuracy 99.81%.
[Princeton's Woodrow Wilson School asked me to write a short essay on information technology challenges facing the Obama Administration, as part of the School's Inaugural activities. Here is my essay.]
Digital technologies can make government more effective, open and transparent, and can make the economy as a whole more flexible and efficient. They can also endanger privacy, disrupt markets, and open the door to cyberterrorism and cyberespionage. In this crowded field of risks and opportunities, it makes sense for the Obama administration to focus on four main challenges.
The first challenge is cybersecurity. Government must safeguard its own mission critical systems, and it must protect privately owned critical infrastructures such as the power grid and communications network. But it won’t be enough to focus only on a few high priority, centralized systems. Much of digital technology’s value—and, today, many of the threats—come from ordinary home and office systems. Government can use its purchasing power to nudge the private sector toward products that are more secure and reliable; it can convene standards discussions; and it can educate the public about basic cybersecurity practices.
The second challenge is transparency. We can harness the potential of digital technology to make government more open, leading toward a better informed and more participatory civic life. Some parts of government are already making exciting progress, and need high-level support; others need to be pushed in the right direction. One key is to ensure that data is published in ways that foster reuse, to support an active marketplace of ideas in which companies, nonprofits, and individuals can find the best ways to analyze, visualize, and “mash up” government information.
The third challenge is to maintain and increase America’s global lead in information technology, which is vital to our prosperity and our role in the world. While recommitting to our traditional strengths, we must work to broaden the reach of technology. We must bring broadband Internet connections to more Americans, by encouraging private-sector investment in high-speed network infrastructure. We must provide better education in information technology, no less than in science or math, to all students. Government cannot solve these problems alone, but can be a catalyst for progress.
The final challenge is to close the culture gap between politicians and technology leaders. The time for humorous anecdotes about politicians who “don’t get” technology, or engineers who are blind to the subtleties of Washington, is over. Working together, we can translate technological progress into smarter government and a more vibrant, dynamic private sector.
Related to my previous post about the future of open technologies, Tim Wu has a great review of Jonathan Zittrain’s book. Wu reviews the origins of the 20th century’s great media empires, which steadily consolidated once-fractious markets. He suggests that the Internet likely won’t meet the same fate. My favorite part:
In the 2000s, AOL and Time Warner took the biggest and most notorious run at trying to make the Internet more like traditional media. The merger was a bet that unifying content and distribution might yield the kind of power that Paramount and NBC gained in the 1920s. They were not alone: Microsoft in the 1990s thought that, by owning a browser (Explorer), dial-in service (MSN), and some content (Slate), it could emerge as the NBC of the Internet era. Lastly, AT&T, the same firm that built the first radio network, keeps signaling plans to assert more control over “its pipes,” or even create its own competitor to the Internet. In 2000, when AT&T first announced its plans to enter the media market, a spokesman said: “We believe it’s very important to have control of the underlying network.”
Yet so far these would-be Zukors and NBCs have crashed and burned. Unlike radio or film, the structure of the Internet stoutly resists integration. AOL tried, in the 1990s, to keep its users in a “walled garden” of AOL content, but its users wanted the whole Internet, and finally AOL gave in. To make it after the merger, AOL-Time Warner needed to build a new garden with even higher walls–some way for AOL to discriminate in favor of Time Warner content. But AOL had no real power over its users, and pretty soon it did not have many of them left.
I think the monolithic media firms of the 20th century ultimately owed their size and success to economies of scale in the communication technologies of their day. For example, a single newspaper with a million readers is a lot cheaper to produce and distribute than ten newspapers with 100,000 readers each. And so the larger film studios, newspapers, broadcast networks, and so on were able to squeeze out smaller players. Once one newspaper in a given area began reaping the benefits of scale, it made it difficult for its competitors to turn a profit, and a lot of them went out of business or got acquired at firesale prices.
On the Internet, distributing content is so cheap that economies of scale in distribution just don’t matter. On a per-reader basis, my personal blog certainly costs more to operate than CNN. But the cost is so small that it’s simply not a significant factor in deciding whether to continue publishing it. Even if the larger sites capture the bulk of the readership and advertising revenue, that doesn’t preclude a “long tail” of small, often amateur sites with a wide variety of different content.
Once people have a taste for what that openness allows, stuffing it back into a box is very difficult. Yes, it’s important to remain vigilant, and yes, people will always attempt to shut off that openness, citing all sorts of “dangers” and “bad things” that the openness allows. But, the overall benefits of the openness are recognized by many, many people — and the great thing about openness is that you really only need a small number of people who recognize its benefits to allow it to flourish.
Closed systems tend to look more elegant at first — and often they are much more elegant at first. But open systems adapt, change and grow at a much faster rate, and almost always overtake closed systems, over time. And, once they overtake the closed systems, almost nothing will allow them to go back. Even if it were possible to turn an open system like the web into a closed system, openness would almost surely sneak out again, via a new method by folks who recognized how dumb it was to close off that open system.
Predictions about the impending demise of open systems have been a staple of tech policy debates for at least a decade. Larry Lessig’s Code and Other Laws of Cyberspace is rightly remembered as a landmark work of tech policy scholarship for its insights about the interplay between “East Coast code” (law) and “West Coast code” (software). But people often forget that it also made some fairly specific predictions. Lessig thought that the needs of e-commerce would push the Internet toward a more centralized architecture: a McInternet that squeezed out free speech and online anonymity.
So far, at least, Lessig’s predictions have been wide of the mark. The Internet is still an open, decentralized system that allows robust anonymity and free speech. But the pessimistic predictions haven’t stopped. Most recently, Jonathan Zittrain wrote a book predicting the impending demise of the Internet’s “generativity,” this time driven by security concerns rather than commercialization.
It’s possible that these thinkers will be proven right in the coming years. But I think it’s more likely that these brilliant legal thinkers have been misled by a kind of optical illusion created by the dynamics of the marketplace. The long-term trend has been a steady triumph for open standards: relatively open technologies like TCP/IP, HTTP, XML, PDF, Java, MP3, SMTP, BitTorrent, USB, and x86 have become dominant in their respective domains. But at any given point in time, a disproportionate share of public discussion is focused on those sectors of the technology industry where open and closed platforms are competing head-to-head. After all, nobody wants to read news stories about, say, the fact that TCP/IP’s market share continues to be close to 100 percent and has no serious competition. And at least superficially, the competition between open and closed systems looks really lopsided: the proprietary options tend to be supported by large, deep-pocketed companies with large development teams, multi-million dollar advertising budgets, distribution deals with leading retailers, and so forth. It’s not surprising that people so frequently conclude that open standards are on the verge of getting crushed.
For example, Zittrain makes the iPhone a poster child for the flashy but non-generative devices he fears will come to dominate the market. And it’s easy to see the iPhone’s advantages. Apple’s widely-respected industrial design department created a beautiful product. Its software engineers created a truly revolutionary user interface. Apple and AT&T both have networks of retail stores with which to promote the iPhone, and Apple is spending millions of dollars airing television ads. On first glance, it looks like open technologies are on the ropes in the mobile marketplace.
But open technologies have a kind of secret weapon: the flexibility and power that comes from decentralization. The success of the iPhone is entirely dependent on Apple making good technical and business decisions, and building on top of proprietary platforms requires navigating complex licensing issues. In contrast, absolutely anyone can use and build on top of an open platform without asking anyone else for permission, and without worrying about legal problems down the line. That means that at any one time, you have a lot of different people trying a lot of different things on that open platform. In the long run, the creativity of millions of people will usually exceed that of a few hundred engineers at a single firm. As Mike says, open systems adapt, change and grow at a much faster rate than closed ones.
Yet much of the progress of open systems tends to happen below the radar. The grassroots users of open platforms are far less likely to put out press releases or buy time for television ads. So often it’s only after an open technology has become firmly entrenched in its market—MySQL in the low-end database market, for example—that the mainstream press starts to take notice of it.
As a result, despite the clear trend toward open platforms in the past, it looks to many people like that pattern is going to stop and perhaps even be reversed. I think this illusion is particularly pronounced for folks who are getting their information second- or third-hand. If you’re judging the state of the technology industry from mainstream media stories, television ads, shelf space at Best Buy, etc., you’re likely not getting the whole story. It’s helpful to remember that open platforms have always looked like underdogs. They’re no more likely to be crushed today than they were in 1999, 1989, or 1979.
Satyam is one of the handful of large companies that dominate the IT outsourcing market in India. A week ago today, B. Ramalinga Raju, the company chairman, confessed to a years-long accounting fraud. More than a billion dollars of cash the company claimed to have on hand, and the business success that putatively generated those dollars, now appear to have been fictitious.
There are many tech policy issues here. For one, frauds this massive in high tech environments are a challenge and opportunity for computer forensics. For another, though we can hope this situation is unique, it may turn out to be the tip of an iceberg. If Satyam turns out to be part of a pattern of lax oversight and exaggerated profits across India’s high tech sector, it might alter the way we look at high tech globalization, forcing us to revise downward our estimates of high tech’s benefits in India. (I suppose it could be construed as a silver lining that such news might also reveal America, and other western nations, to be more globally competitive in this arena than we had believed them to be.)
But my interest in the story is more personal. I met Mr. Raju in early 2007, when Satyam helped organize and sponsor a delegation of American journalists to India. (I served as Managing Editor of The American at the time.) India’s tech sector wanted good press in America, a desire perhaps increased by the fact that Democrats who were sometimes skeptical of free trade had just assumed control of the House. It was a wonderful trip—we were treated well at others’ expense and got to see, and learn about, the Indian tech sector and the breathtaking city of Hyderabad. I posted pictures of the trip on Flickr, mentioning “Satyam” in the description, showed the pics to a few friends, and moved on with life.
Then came last week’s news, and with it a spike in traffic to my Flickr account: several thousand people suddenly viewing my pictures of Satyam’s pristine campus.
When I think about the digital “trails” I leave behind—the Flickr, Facebook, and Twitter ephemera that define me by implication—there are some easy presumptions about what the future will hold. Evidence of raw emotions, the unmediated anger, romantic infatuation, depression or exhilaration that life sometimes holds, should generally be kept out of the record, since the social norms that govern public display of such phenomena are still evolving. While others in their twenties may consider such material normal, it reflects a life-in-the-fishbowl style of conduct that older people can find untoward, a style that would years ago have counted as exhibitionistic or otherwise misguided.
I would never, however, have guessed that a business trip to a corporate office park might one day be a prominent part of my online persona. In this case, I happen to be perfectly comfortable with the result—but that feels like luck. A seemingly innocuous trace I leave online, that later becomes salient, might just as easily prove problematic for me, or for someone else. There seems to be a larger lesson here: That anything we leave online could, for reasons we can’t guess at today, turn out to be important later. The inadvertent web—the set of seemingly trivial web content that exists today and will turn out to be important—may turn out to be a powerful force in favor of limiting what we put online.