January 19, 2017

Bilski and the Value of Experimentation

The Supreme Court’s long-awaited decision in Bilski v. Kappos brought closure to this particular patent prosecution, but not much clarity to the questions surrounding business method patents. The Court upheld the Federal Circuit’s conclusion that the claimed “procedure for instructing buyers and sellers how to protect against the risk of price fluctuations in a discrete section of the economy” was unpatentable, but threw out the “machine-or-transformation” test the lower court had used. In its place, the Court’s majority gave us a set of “clues” which future applicants, Sherlock Holmes-like, must use to discern the boundaries separating patentable processes from unpatentable “abstract ideas.”

The Court missed an opportunity to throw out “business method” patents, where a great many of these abstract ideas are currently claimed, and failed to address the abstraction of many software patents. Instead, Justice Kennedy’s majority seemed to go out of its way to avoid deciding even the questions presented, simultaneously appealing to the new technological demands of the “Information Age”:

As numerous amicus briefs argue, the machine-or-transformation test would create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.

and yet re-ups the uncertainty on the same page:

It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection.

The Court’s opinion dismisses the Federal Circuit’s brighter-line “machine-or-transformation” test in favor of hand-waving standards: a series of “clues,” “tools,” and “guideposts” toward the unpatentable “abstract ideas.” While Kennedy notes that “This Age puts the possibility of innovation in the hands of more people,” his opinion leaves all of those people with new burdens of uncertainty — whether they seek patents or reject patents’ exclusivity but risk running into the patents of others. No wonder Justice Stevens, who concurs in the rejection of Bilski’s application but would have thrown business method patents out with it, calls the whole thing “less than pellucid.”

The one thing the meandering makes clear is that while the Supreme Court doesn’t like the Federal Circuit’s test (despite the Federal Circuit’s attempt to derive it from prior Supreme Court precedents), neither do the Supremes want to propose a new test of their own. The decision, like prior patent cases to reach the Supreme Court, points to a larger structural problem: the lack of a diverse proving-ground for patent cases.

Since 1982, patent cases, unlike most other cases in our federal system, have all been appealed to one court, the United States Court of Appeals for the Federal Circuit. Thus while copyright appeals, for example, are heard in the circuit court for the district in which they originate (one of twelve regional circuits), all patent appeals are funneled to the Federal Circuit. And while a circuit’s judges may be persuaded by other circuits’ opinions, no circuit is bound to follow its fellows, and circuits may “split” on legal questions. Consolidation in the Federal Circuit deprives the Supreme Court of such “circuit splits” in patent law. At most, it may see dissents from the Federal Circuit’s panel or en banc decision. If it doesn’t like the Federal Circuit’s test, the Supreme Court has no other appellate court to which to turn.

Circuit splits are good for judicial decisionmaking. They permit experimentation and dialogue around difficult points of law. (The Supreme Court hears fewer than 5% of the cases appealed to it, but is twice as likely to take cases presenting inter-circuit splits.) Like the states in the federal system, multiple circuits provide a “laboratory [to] try novel social and economic experiments.” Diverse judges examining the same law, as presented in differing circumstances, can analyze it from different angles (and differing policy perspectives). The Supreme Court considering an issue ripened by the analysis of several courts is more likely to find a test it can support, less likely to have to craft one from scratch or abjure the task. At the cost of temporary non-uniformity, we may get empirical evidence toward better interpretation.

At a time when “harmonization” is pushed as justification for treaties (and a uniform ratcheting-up of intellectual property regimes), the Bilski opinion suggests again that uniformity is overrated, especially if it’s uniform murk.

Thoughts on juries for intellectual property lawsuits

Here’s a thought that’s been stuck in my head for the past few days. It would never be practical, but it’s an interesting idea to ponder. David Robinson tells me I’m not the first one to have this idea, either, but anyway…

Consider what happens in intellectual property lawsuits, particularly concerning infringement of patents or misappropriation of trade secrets. Ultimately, a jury is being asked to rule on essential questions like whether a product meets all the limitations of a patent’s claims, or whether a given trade secret was already known to the public. How does the jury reach a verdict? They’re presented with evidence and with testimony from experts for the plaintiff and experts for the defendant. The jurors then have to sort out whose arguments they find most persuasive. (Of course, a juror who doesn’t follow the technical details could well favor an expert whom they find more personable, or better able to handle the pressure of a hostile cross-examination.)

One key issue in many patent cases is the interpretation of particular words in the patent. If they’re interpreted narrowly, then the accused product doesn’t infringe, because it doesn’t have the specific required feature. Conversely, if the claims are interpreted broadly enough for the accused product to infringe the patent, then the prior art to the patent might also land within the broader scope of the claims, thus rendering the patent invalid as either anticipated by or rendered obvious by the prior art. Even though the court will construe the claims in its Markman ruling, there’s often still plenty of room for argument. How, then, does the jury sort out the breadth of the terms of a patent? Again, they watch dueling experts, dueling attorneys, and so forth, and then reach their own conclusions.

What’s missing from this game is a person having ordinary skill in the art at the time of the invention (PHOSITA). One of the jobs of an expert is to interpret the claims of a patent from the perspective of a PHOSITA. Our hypothetical PHOSITA’s perspective is also essential to understanding how obvious a patent’s invention is relative to the prior art. The problem I want to discuss today is that in most cases, nobody on the jury is a PHOSITA or anywhere close. What would happen if they were?

A hypothetical jury of PHOSITAs would be better equipped to read the patent themselves and directly answer questions that are presently left for experts to argue. Does this patent actually enable a PHOSITA to build the gadget (i.e., to “practice the invention”)? Would the patent in question have been obvious given a description of the prior art at the time? Or, say in a trade secret case, is the accused secret something that’s actually well known? A PHOSITA jury could reason about these questions from its own perspective. Imagine, in a software-related case, being able to put source code in front of a jury and have them read it independently. This idea effectively rethinks the concept of a jury of one’s peers. What if juries on technical cases were “peers” with the technology that’s on trial? It would completely change the game.

This idea would never fly for a variety of reasons. First and foremost, good luck finding enough people with the right skill sets and lacking any conflict of interest. Even if our court system had enough data on the citizenry to be able to identify suitable jury candidates (oh, the privacy concerns!), some courts’ jurisdictions simply don’t have enough citizens with the necessary skills and lack of conflicts. What would you do? Move the lawsuit to a different jurisdiction? How many parts of the country have a critical mass of engineers/scientists with the necessary skills? Furthermore, a lot of the wrangling in a lawsuit boils down to controlling what information is and is not presented to the jury. If the jury shows up with their own knowledge, they may reach their own conclusions based on that knowledge, and that’s something that many lawyers and courts would find undesirable because they couldn’t control it.

Related discussion shows up in a recent blog post by Julian Sanchez and a followup by Eric Rescorla. Sanchez’s thesis is that it’s much easier to make a scientific argument that sounds plausible, while being completely bogus, than it is to refute such an argument, because the refutation could well require building up an explanation of the relevant scientific background. He’s talking about climate change scientists vs. deniers or about biologists refuting “intelligent design” advocates, but the core of the argument is perfectly applicable here. A PHOSITA jury would have a better chance of seeing through bogus arguments, and consequently would be more likely to reach a sound verdict.

Obama's Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier – no other major player has gone into anywhere near his level of detail in official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him – but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on it over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs – information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service and so on). Along the way, other areas get passing mention – immigration of high tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked-about tech policy issue in Washington – it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.
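The breadth tradeoff can be made concrete with a toy sketch. Everything here is invented for illustration: the blocklists, the sample messages, and the naive word-matching approach, which is far cruder than any shipping filter but exhibits the same dilemma.

```python
# Toy keyword filter illustrating the breadth tradeoff: a broad
# blocklist over-blocks harmless chat, a narrow one under-blocks
# messages that actually reveal personal information.

def is_blocked(message: str, blocklist: set) -> bool:
    """Block a message if it contains any blocklisted word."""
    words = set(message.lower().split())
    return bool(words & blocklist)

broad_list = {"address", "phone", "school"}   # aggressive, hypothetical
narrow_list = {"phone"}                       # conservative, hypothetical

benign = "my school project is due tomorrow"      # harmless chat
risky = "here is my home address and school"      # reveals personal info

# Too broad: the harmless message is blocked along with the risky one.
assert is_blocked(benign, broad_list)
assert is_blocked(risky, broad_list)

# Too narrow: both messages sail through, including the risky one.
assert not is_blocked(benign, narrow_list)
assert not is_blocked(risky, narrow_list)
```

No choice of blocklist gets both examples right, which is the design tradeoff in miniature: any fixed rule either stops desirable use or permits undesirable activity.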

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer, who launched Govtrack.us, has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.
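To see why format matters, compare the same hypothetical vote record as free text and as structured data. The bill number, date, and vote counts below are invented, and JSON is just one example of an open format; the point is what a tool like MAPlight or GovTrack can do with each version.

```python
import json

# The same (invented) fact, in two formats.
plain_text = "H.R. 1234 passed the House on 2007-12-01 by a vote of 220-198."

structured = json.dumps({
    "bill": "H.R. 1234",
    "chamber": "House",
    "date": "2007-12-01",
    "result": "passed",
    "yeas": 220,
    "nays": 198,
})

# With metadata, downstream tools can compute and aggregate directly,
# no scraping required.
record = json.loads(structured)
margin = record["yeas"] - record["nays"]
assert margin == 22
```

The plain-text version carries the same information, but extracting it requires ad hoc parsing that breaks whenever the wording changes; the structured version lets anyone build on the data reliably.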

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.