November 21, 2024

Bilski and the Value of Experimentation

The Supreme Court’s long-awaited decision in Bilski v. Kappos brought closure to this particular patent prosecution, but not much clarity to the questions surrounding business method patents. The Court upheld the Federal Circuit’s conclusion that the claimed “procedure for instructing buyers and sellers how to protect against the risk of price fluctuations in a discrete section of the economy” was unpatentable, but rejected the “machine-or-transformation” test the lower court had applied as the sole criterion of patentability. In its place, the Court’s majority gave us a set of “clues” which future applicants, Sherlock Holmes-like, must use to discern the boundaries separating patentable processes from unpatentable “abstract ideas.”

The Court missed an opportunity to throw out “business method” patents, where a great many of these abstract ideas are currently claimed, and failed to address the abstraction of many software patents. Instead, Justice Kennedy’s majority seemed to go out of its way to avoid deciding even the questions presented, simultaneously appealing to the new technological demands of the “Information Age”:

As numerous amicus briefs argue, the machine-or-transformation test would create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.

and yet re-upping the uncertainty on the same page:

It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection.

The Court’s opinion dismisses the Federal Circuit’s brighter-line “machine-or-transformation” test in favor of hand-waving standards: a series of “clues,” “tools” and “guideposts” toward the unpatentable “abstract ideas.” While Kennedy notes that “This Age puts the possibility of innovation in the hands of more people,” his opinion leaves all of those people with new burdens of uncertainty — whether they seek patents or reject patents’ exclusivity but risk running into the patents of others. No wonder Justice Stevens, who concurs in the rejection of Bilski’s application but would have thrown business method patents out with it, calls the whole thing “less than pellucid.”

The one thing the meandering makes clear is that while the Supreme Court doesn’t like the Federal Circuit’s test (despite the Federal Circuit’s attempt to derive it from prior Supreme Court precedents), neither do the Supremes want to propose a new test of their own. The decision, like prior patent cases to reach the Supreme Court, points to a larger structural problem: the lack of a diverse proving ground for patent cases.

Since 1982, patent cases, unlike most other cases in our federal system, have all been appealed to one court, the United States Court of Appeals for the Federal Circuit. Thus while copyright appeals, for example, are heard in the circuit court for the district in which they originate (one of twelve regional circuits), all patent appeals are funneled to the Federal Circuit. And while a circuit’s judges may be persuaded by other circuits’ opinions, no circuit is bound to follow its fellows, so circuits may “split” on legal questions. Consolidation in the Federal Circuit deprives the Supreme Court of such “circuit splits” in patent law. At most, it may have dissents from the Federal Circuit’s panel or en banc decision. If it doesn’t like the Federal Circuit’s test, the Supreme Court has no other appellate court to which to turn.

Circuit splits are good for judicial decisionmaking. They permit experimentation and dialogue around difficult points of law. (The Supreme Court hears fewer than 5% of the cases appealed to it, but is twice as likely to take cases presenting inter-circuit splits.) Like the states in the federal system, multiple circuits provide a “laboratory [to] try novel social and economic experiments.” Diverse judges examining the same law, as presented in differing circumstances, can analyze it from different angles (and differing policy perspectives). The Supreme Court considering an issue ripened by the analysis of several courts is more likely to find a test it can support, less likely to have to craft one from scratch or abjure the task. At the cost of temporary non-uniformity, we may get empirical evidence toward better interpretation.

At a time when “harmonization” is pushed as justification for treaties (and a uniform ratcheting-up of intellectual property regimes), the Bilski opinion suggests again that uniformity is overrated, especially if it’s uniform murk.

Thoughts on juries for intellectual property lawsuits

Here’s a thought that’s been stuck in my head for the past few days. It would never be practical, but it’s an interesting idea to ponder. David Robinson tells me I’m not the first one to have this idea, either, but anyway…

Consider what happens in intellectual property lawsuits, particularly concerning infringement of patents or misappropriation of trade secrets. Ultimately, a jury is being asked to rule on essential questions like whether a product meets all the limitations of a patent’s claims, or whether a given trade secret was already known to the public. How does the jury reach a verdict? They’re presented with evidence and with testimony from experts for the plaintiff and experts for the defendant. The jurors then have to sort out whose arguments they find most persuasive. (Of course, a juror who doesn’t follow the technical details could well favor an expert whom they find more personable, or better able to handle the pressure of a hostile cross-examination.)

One key issue in many patent cases is the interpretation of particular words in the patent. If they’re interpreted narrowly, then the accused product doesn’t infringe, because it doesn’t have the specific required feature. Conversely, if the claims are interpreted broadly enough for the accused product to infringe the patent, then the prior art to the patent might also land within the broader scope of the claims, thus rendering the patent invalid as either anticipated by or rendered obvious by the prior art. Even though the court will construe the claims in its Markman ruling, there’s often still plenty of room for argument. How, then, does the jury sort out the breadth of the terms of a patent? Again, they watch dueling experts, dueling attorneys, and so forth, and then reach their own conclusions.
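To make that squeeze concrete, here is a toy sketch in Python. The claim term, products, and prior art are invented for illustration and are not drawn from any real case; the sketch just models a claim limitation as a predicate over a product’s features and shows how broadening the reading sweeps in the prior art along with the accused product.

    # Toy model of the claim-construction "squeeze" described above.
    # The claim term, products, and prior art are all hypothetical.

    def narrow_reading(features):
        # "fastening means" read narrowly: only a literal screw counts
        return "screw" in features

    def broad_reading(features):
        # "fastening means" read broadly: any way of holding parts together
        return bool(features & {"screw", "nail", "adhesive"})

    accused_product = {"adhesive"}   # the product accused of infringement
    prior_art = {"nail"}             # a device known before the patent

    for name, reading in (("narrow", narrow_reading), ("broad", broad_reading)):
        print(name, "reading:",
              "infringes" if reading(accused_product) else "does not infringe",
              "/",
              "anticipated by prior art" if reading(prior_art) else "novel over prior art")

Under the narrow reading the accused product escapes; under the broad reading it infringes, but the prior art now falls inside the claim and threatens to invalidate it. That is exactly the tension the dueling experts end up arguing about.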

What’s missing from this game is a person having ordinary skill in the art at the time of the invention (PHOSITA). One of the jobs of an expert is to interpret the claims of a patent from the perspective of a PHOSITA. Our hypothetical PHOSITA’s perspective is also essential to understanding how obvious a patent’s invention is relative to the prior art. The problem I want to discuss today is that in most cases, nobody on the jury is a PHOSITA or anywhere close. What would happen if they were?

A hypothetical jury of PHOSITAs would be better equipped to read the patent themselves and directly answer questions that are presently left for experts to argue. Does this patent actually enable a PHOSITA to build the gadget (i.e., to “practice the invention”)? Would the patent in question be obvious given a description of the prior art at the time? Or, say in a trade secret case, is the accused secret something that’s actually well-known? With a PHOSITA jury, the jurors could reason about these questions from their own perspective. Imagine, in a software-related case, being able to put source code in front of a jury and have them be able to read it independently. This idea effectively rethinks the concept of a jury of one’s peers. What if juries on technical cases were “peers” with the technology that’s on trial? It would completely change the game.

This idea would never fly for a variety of reasons. First and foremost, good luck finding enough people with the right skill sets and lacking any conflict of interest. Even if our court system had enough data on the citizenry to be able to identify suitable jury candidates (oh, the privacy concerns!), some courts’ jurisdictions simply don’t have enough citizens with the necessary skills and lack of conflicts. What would you do? Move the lawsuit to a different jurisdiction? How many parts of the country have a critical mass of engineers/scientists with the necessary skills? Furthermore, a lot of the wrangling in a lawsuit boils down to controlling what information is and is not presented to the jury. If the jury shows up with their own knowledge, they may reach their own conclusions based on that knowledge, and that’s something that many lawyers and courts would find undesirable because they couldn’t control it.

Related discussion shows up in a recent blog post by Julian Sanchez and a followup by Eric Rescorla. Sanchez’s thesis is that it’s much easier to make a scientific argument that sounds plausible, while being completely bogus, than it is to refute such an argument, because the refutation could well require building up an explanation of the relevant scientific background. He’s talking about climate change scientists vs. deniers or about biologists refuting “intelligent design” advocates, but the core of the argument is perfectly applicable here. A PHOSITA jury would have a better chance of seeing through bogus arguments and consequently would be more likely to reach a sound verdict.

Obama’s Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier – none of the other major players has gone into anywhere near his level of detail in their official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him – but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on that over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs – information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service and so on). Along the way, other areas get passing mention – immigration of high tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked-about tech policy issue in Washington – it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.
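A minimal sketch, assuming the filter works by matching a keyword blocklist (the word list and sample messages below are invented for illustration), shows how the same rule can be too broad and too narrow at once:

    # Sketch of a keyword-blocklist filter, illustrating the tradeoff above.
    # The blocklist and sample messages are invented for illustration.

    BLOCKLIST = {"address", "phone"}   # crude stand-in for "personal information"

    def blocked(message):
        return any(word in BLOCKLIST for word in message.lower().split())

    samples = [
        "my phone number is 555-0100",                # should block: blocked
        "the gettysburg address was a short speech",  # harmless: blocked anyway (too broad)
        "i live at 12 elm street",                    # reveals info: slips through (too narrow)
    ]

    for message in samples:
        print("BLOCKED" if blocked(message) else "allowed", "-", message)

Widening the blocklist would catch the street address but block still more innocent speech; narrowing it does the reverse. No tuning of the list escapes the tradeoff.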

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer, who launched Govtrack.us, has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.
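To illustrate the difference, compare a flat text record with a structured version of the same fact. The bill number, vote tally, and field names below are invented for illustration, not taken from THOMAS or GovTrack.

    # Illustration of plain text vs. an open, metadata-rich format.
    # The bill, vote, and field names are invented for illustration.
    import json

    plain_text = "H.R. 1234 passed the House 218-210 on 2007-12-18."

    structured = {
        "bill": {"chamber": "house", "number": 1234},
        "action": "passage",
        "vote": {"yea": 218, "nay": 210},
        "date": "2007-12-18",
    }

    # A person can read either one, but only the structured record can be
    # joined against other data sets (sponsors, contributions, members'
    # voting histories) without screen-scraping.
    print(plain_text)
    print(json.dumps(structured, indent=2))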

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.

iPhone Unlocking Secret Revealed

The iPhone unlocking story took its next logical turn this week, with the release of a free iPhone unlocking program. Previously, unlocking required buying a commercial program or following a scary sequence of documented hardware and software tweaks.

How this happened is interesting in itself. (Caveat: This is based on the stories I’m hearing; I haven’t confirmed it all myself.) The biggest technical barrier to a software-only unlock procedure was figuring out how the unlocking program, once installed on the iPhone, could modify the machine’s innermost configuration information – something that Apple’s iPhone operating system software was trying to prevent. A company called iPhoneSimFree figured out a way to do this, and used it to develop easy-to-use iPhone unlocking software, which they started selling.

Somebody bought a copy of the iPhoneSimFree software and reverse engineered it, to figure out how it could get at the iPhone’s internal configuration. The trick, once discovered, was easy to replicate, which eliminated the last remaining barrier to the development and release of free iPhone unlocking software.

It’s a commonplace in computer security that physical control over a device can almost always be leveraged to control it. (This iceberg has sunk many DRM Titanics.) This principle was the basis for iPhoneSimFree’s business model – helping users control their iPhones – but it boomeranged on them when a reverse engineer applied the same principle to iPhoneSimFree’s own product. Once the secret was out, anyone could make iPhone unlocking software, and the price of that software would inevitably be driven down to its marginal cost of zero.

Intellectual property law had little to offer iPhoneSimFree. The trick turned out to be a fact about how Apple’s software worked – not copyrightable by iPhoneSimFree, and not patentable in practice. Trade secret law didn’t help either, because trade secrets are not shielded against reverse engineering (for good reason). They could have attached a license agreement to their product, making customers promise not to reverse engineer it, but that would not be effective either. And it might not have been the smartest thing to rely on, given that their own product was surely based on reverse engineering of the iPhone.

Now that the unlocking software is out, the ball is in Apple’s court. Will they try to cram the toothpaste back into the tube? Will they object publicly but accept that the iPhone unlocking battle is essentially over? Will they try to play another round, by modifying the iPhone software? Apple tends to be clever about these things, so their strategy, whatever it is, will have something to teach us.

Intellectual Property and Magicians

Jacob Loshin has an interesting draft paper on intellectual property among magicians. Stage magic is a form of technology, relying on both apparatus and technique to mislead the audience about what is really happening. As in any other technical field, innovations are valuable, and practitioners look for ways to cash in on their inventions. They do this, according to Loshin, without much use of intellectual property law.

This makes magic, like cuisine and clothing design, a thriving field that operates despite a lack of strong legal protection for innovation. Recently legal scholars have started looking harder at such fields, hoping to find mechanisms that can support innovation without the cost and complexity of conventional intellectual property law, and wondering how broadly those alternative mechanisms might be applied.

What makes magic unusual is that practitioners rarely rely on intellectual property law even though magic tricks are protectable by patent and as trade secrets. Patent protection should be obvious: patents cover novel mechanisms and methods, which most magic technologies are. Some classic tricks, such as the saw-a-person-in-half trick, have been patented. Trade secret protection should be obvious too: how to do a particular trick is valuable business information whose secrecy can be protected by the inventor. (The audience sees the trick done, but they don’t really see the secret of the trick.)

Yet Loshin, and apparently most magicians, think that patent and trade secret are a poor fit. There are basically three reasons for this. First, part of the value of a trick is that the audience can’t figure out how it’s done; but a patent must explain the details of the invention. Second, tricks are subject to “reverse engineering” by rival magicians who watch the trick done, repeatedly, from different parts of the audience, then do experiments to try to replicate it; and of course trade secrets are not protected against reverse engineering. Third, there’s a sort of guild mentality among magicians, holding that knowledge can be shared within the profession but must not be shared with the public. This guild mentality can’t easily be implemented within current law – a trade secret must be carefully protected, and so cannot be passed around casually within a loosely defined “community”.

The result is that the guild protects its secrets through social norms. You’re accepted into the guild by demonstrating technical prowess and following the guild’s norms over time; and you’ll be excommunicated if you violate the norms, for example by making a tell-all TV special about how popular tricks are done. (There’s an exception for casual magic tricks of the sort kids do.) The system operates informally but effectively.

As a policy guy, I have to ask whether this system is good for society as a whole. I can understand why those inside the profession would want to limit access to information – why help potential competitors? But does it really benefit society as a whole to have some unelected group deciding who gets access to certain kinds of information, and doing this outside the normal channels that (at least in principle) balance the interests of society against those of inventors? It’s not an easy question.

(To be clear, asking whether something is good or bad for society is not the same as asking whether government should regulate it. A case for regulation would require, at least, that the regulated behavior be bad for society and that there be a practically beneficial way for government to intervene.)

The best argument that magicians’ guild secrecy benefits the public is that tricks are more valuable to the public if the public doesn’t know how they are done. This is almost never the case for other technologies – knowing how your iPod works doesn’t make it less valuable to you – but it just might be true for magic, given that it exists for entertainment and you might enjoy it more if you don’t know how it’s done.

But I have my doubts that publishing information about tricks actually makes them less entertaining. Goldin’s patent on the saw-a-person-in-half trick – which explains pretty clearly how to do the trick – was issued in 1923, but the trick is still a staple today. In theory, anybody can read Goldin’s patent whenever they want; but in practice hardly anybody has read it, and we all enjoy the trick despite suspecting how it’s probably done. And do we really need to read Gaughan’s patent to know how a “levitating” magician stays up in the air? Gaughan’s cleverness is all about how to keep the audience from seeing the evidence of how it’s done.

One effect of the guild’s secrecy is that the public rarely learns who the great innovators are. We know who puts on a good show, but we rarely know who invented the tricks. The great innovators may be venerated within the profession, but they’re unknown to the public. One has to wonder whether the field would move faster, and be more innovative and entertaining, if it were more open.