April 17, 2014

Bilski and the Value of Experimentation

The Supreme Court’s long-awaited decision in Bilski v. Kappos brought closure to this particular patent prosecution, but not much clarity to the questions surrounding business method patents. The Court upheld the Federal Circuit’s conclusion that the claimed “procedure for instructing buyers and sellers how to protect against the risk of price fluctuations in a discrete section of the economy” was unpatentable, but threw out the “machine-or-transformation” test the lower court had used. In its place, the Court’s majority gave us a set of “clues” which future applicants, Sherlock Holmes-like, must use to discern the boundaries separating patentable processes from unpatentable “abstract ideas.”

The Court missed an opportunity to throw out “business method” patents, where a great many of these abstract ideas are currently claimed, and failed to address the abstractness of many software patents. Instead, Justice Kennedy’s majority seemed to go out of its way to avoid deciding even the questions presented, simultaneously appealing to the new technological demands of the “Information Age”:

As numerous amicus briefs argue, the machine-or-transformation test would create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.

and yet re-upping the uncertainty on the same page:

It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection.

The Court’s opinion dismisses the Federal Circuit’s brighter-line “machine-or-transformation” test in favor of hand-waving standards: a series of “clues,” “tools” and “guideposts” for locating the unpatentable “abstract ideas.” While Kennedy notes that “This Age puts the possibility of innovation in the hands of more people,” his opinion leaves all of those people with new burdens of uncertainty — whether they seek patents themselves, or forgo patents’ exclusivity but risk running into the patents of others. No wonder Justice Stevens, who concurs in the rejection of Bilski’s application but would have thrown business method patents out with it, calls the whole thing “less than pellucid.”

The one thing the meandering makes clear is that while the Supreme Court doesn’t like the Federal Circuit’s test (despite the Federal Circuit’s attempt to derive it from prior Supreme Court precedents), neither do the Supremes want to propose a new test of their own. The decision, like prior patent cases to reach the Supreme Court, points to a larger structural problem: the lack of a diverse proving ground for patent cases.

Since 1982, patent cases, unlike most other cases in our federal system, have all been appealed to a single court, the United States Court of Appeals for the Federal Circuit. Thus while copyright appeals, for example, are heard in the circuit court for the district in which they originate (one of twelve regional circuits), all patent appeals are funneled to the Federal Circuit. Among the regional circuits, judges may be persuaded by other circuits’ opinions, but no circuit is bound to follow its fellows, and circuits may “split” on legal questions. Consolidation in the Federal Circuit deprives the Supreme Court of such “circuit splits” in patent law. At most, it may see dissents from the Federal Circuit’s panel or en banc decisions. If it doesn’t like the Federal Circuit’s test, the Supreme Court has no other appellate court to which to turn.

Circuit splits are good for judicial decisionmaking. They permit experimentation and dialogue around difficult points of law. (The Supreme Court hears fewer than 5% of the cases appealed to it, but is twice as likely to take cases presenting inter-circuit splits.) Like the states in the federal system, multiple circuits provide a “laboratory [to] try novel social and economic experiments.” Diverse judges examining the same law, as presented in differing circumstances, can analyze it from different angles (and differing policy perspectives). The Supreme Court considering an issue ripened by the analysis of several courts is more likely to find a test it can support, less likely to have to craft one from scratch or abjure the task. At the cost of temporary non-uniformity, we may get empirical evidence toward better interpretation.

At a time when “harmonization” is pushed as justification for treaties (and a uniform ratcheting-up of intellectual property regimes), the Bilski opinion suggests again that uniformity is overrated, especially if it’s uniform murk.

Thoughts on juries for intellectual property lawsuits

Here’s a thought that’s been stuck in my head for the past few days. It would never be practical, but it’s an interesting idea to ponder. David Robinson tells me I’m not the first one to have this idea, either, but anyway…

Consider what happens in intellectual property lawsuits, particularly concerning infringement of patents or misappropriation of trade secrets. Ultimately, a jury is being asked to rule on essential questions like whether a product meets all the limitations of a patent’s claims, or whether a given trade secret was already known to the public. How does the jury reach a verdict? They’re presented with evidence and with testimony from experts for the plaintiff and experts for the defendant. The jurors then have to sort out whose arguments they find most persuasive. (Of course, a juror who doesn’t follow the technical details could well favor an expert whom they find more personable, or better able to handle the pressure of a hostile cross-examination.)

One key issue in many patent cases is the interpretation of particular words in the patent. If they’re interpreted narrowly, then the accused product doesn’t infringe, because it doesn’t have the specific required feature. Conversely, if the claims are interpreted broadly enough for the accused product to infringe the patent, then the prior art might also land within the broader scope of the claims, rendering the patent invalid as anticipated or obvious in light of the prior art. Even though the court will construe the claims in its Markman ruling, there’s often still plenty of room for argument. How, then, does the jury sort out the breadth of a patent’s terms? Again, they watch dueling experts, dueling attorneys, and so forth, and then reach their own conclusions.

What’s missing from this game is a person having ordinary skill in the art at the time of the invention (PHOSITA). One of the jobs of an expert is to interpret the claims of a patent from the perspective of a PHOSITA. Our hypothetical PHOSITA’s perspective is also essential to understanding how obvious a patent’s invention is relative to the prior art. The problem I want to discuss today is that in most cases, nobody on the jury is a PHOSITA or anywhere close. What would happen if they were?

A hypothetical jury of PHOSITAs would be better equipped to read the patent themselves and directly answer questions that are presently left for experts to argue. Does this patent actually enable a PHOSITA to build the gadget (i.e., to “practice the invention”)? Would the patent in question be obvious given a description of the prior art at the time? Or, say in a trade secret case, is the accused secret something that’s actually well-known? A PHOSITA jury could reason about these questions from its own perspective. Imagine, in a software-related case, being able to put source code in front of a jury and have them be able to read it independently. This idea effectively rethinks the concept of a jury of one’s peers. What if juries on technical cases were “peers” with the technology that’s on trial? It would completely change the game.

This idea would never fly for a variety of reasons. First and foremost, good luck finding enough people with the right skill sets and lacking any conflict of interest. Even if our court system had enough data on the citizenry to be able to identify suitable jury candidates (oh, the privacy concerns!), some courts’ jurisdictions simply don’t have enough citizens with the necessary skills and lack of conflicts. What would you do? Move the lawsuit to a different jurisdiction? How many parts of the country have a critical mass of engineers/scientists with the necessary skills? Furthermore, a lot of the wrangling in a lawsuit boils down to controlling what information is and is not presented to the jury. If the jury shows up with their own knowledge, they may reach their own conclusions based on that knowledge, and that’s something that many lawyers and courts would find undesirable because they couldn’t control it.

Related discussion shows up in a recent blog post by Julian Sanchez and a followup by Eric Rescorla. Sanchez’s thesis is that it’s much easier to make a scientific argument that sounds plausible, while being completely bogus, than it is to refute such an argument, because the refutation could well require building up an explanation of the relevant scientific background. He’s talking about climate change scientists vs. deniers or about biologists refuting “intelligent design” advocates, but the core of the argument is perfectly applicable here. A PHOSITA jury would have a better chance of seeing through bogus arguments, and consequently would be more likely to reach a sound verdict.

Obama's Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier – none of the other major players has gone into anywhere near his level of detail in their official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him – but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on that over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs – information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service and so on). Along the way, other areas get passing mention – immigration of high tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked about tech policy issue in Washington – it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.
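
To make the tradeoff concrete, here is a toy keyword filter of the kind such software builds on. The blocklist and the example pages are invented for illustration, but they show how a single blocklist can be simultaneously too broad and too narrow:

    # A deliberately naive blocklist filter (illustrative only).
    BLOCKLIST = {"breast", "drugs"}

    def is_blocked(page_text):
        """Block a page if any word on it appears in the blocklist."""
        return bool(set(page_text.lower().split()) & BLOCKLIST)

    # Too broad: a legitimate health page gets blocked.
    print(is_blocked("screening guidelines for breast cancer"))  # True

    # Too narrow: trivially obfuscated text sails through.
    print(is_blocked("buy dr*gs here cheap"))  # False

Real filters are far more sophisticated, but they face the same structural dilemma: reducing overblocking tends to increase underblocking, and vice versa.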

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer, who launched Govtrack.us, has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.
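
To see why the format matters, compare a plain-text record of a bill with a structured one. The bill details and field names below are made up for illustration, not drawn from any agency’s actual schema:

    import json

    # The same information, published two ways.
    plain = "H.R.1908 - Patent Reform Act of 2007 - Introduced 2007-04-18"
    structured = json.dumps({
        "bill": "H.R.1908",
        "title": "Patent Reform Act of 2007",
        "introduced": "2007-04-18",
    })

    # Plain text forces every reuser to write a fragile, site-specific parser...
    bill_id = plain.split(" - ")[0]

    # ...while an open format with metadata can be consumed directly by any tool.
    bill_id = json.loads(structured)["bill"]
    print(bill_id)  # H.R.1908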

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.

iPhone Unlocking Secret Revealed

The iPhone unlocking story took its next logical turn this week, with the release of a free iPhone unlocking program. Previously, unlocking required buying a commercial program or following a scary sequence of documented hardware and software tweaks.

How this happened is interesting in itself. (Caveat: This is based on the stories I’m hearing; I haven’t confirmed it all myself.) The biggest technical barrier to a software-only unlock procedure was figuring out how the unlocking program, once installed on the iPhone, could modify the machine’s innermost configuration information – something that Apple’s iPhone operating system software was trying to prevent. A company called iPhoneSimFree figured out a way to do this, and used it to develop easy-to-use iPhone unlocking software, which they started selling.

Somebody bought a copy of the iPhoneSimFree software and reverse engineered it, to figure out how it could get at the iPhone’s internal configuration. The trick, once discovered, was easy to replicate, which eliminated the last remaining barrier to the development and release of free iPhone unlocking software.

It’s a commonplace in computer security that physical control over a device can almost always be leveraged to control it. (This iceberg has sunk many DRM Titanics.) This principle was the basis for iPhoneSimFree’s business model – helping users control their iPhones – but it boomeranged on them when a reverse engineer applied the same principle to iPhoneSimFree’s own product. Once the secret was out, anyone could make iPhone unlocking software, and the price of that software would inevitably be driven down to its marginal cost of zero.

Intellectual property law had little to offer iPhoneSimFree. The trick turned out to be a fact about how Apple’s software worked – not copyrightable by iPhoneSimFree, and not patentable in practice. Trade secret law didn’t help either, because trade secrets are not shielded against reverse engineering (for good reason). They could have attached a license agreement to their product, making customers promise not to reverse engineer it, but that would not have been effective either. And it might not have been the smartest thing to rely on, given that their own product was surely based on reverse engineering of the iPhone.

Now that the unlocking software is out, the ball is in Apple’s court. Will they try to cram the toothpaste back into the tube? Will they object publicly but accept that the iPhone unlocking battle is essentially over? Will they try to play another round, by modifying the iPhone software? Apple tends to be clever about these things, so their strategy, whatever it is, will have something to teach us.

Intellectual Property and Magicians

Jacob Loshin has an interesting draft paper on intellectual property among magicians. Stage magic is a form of technology, relying on both apparatus and technique to mislead the audience about what is really happening. As in any other technical field, innovations are valuable, and practitioners look for ways to cash in on their inventions. They do this, according to Loshin, without much use of intellectual property law.

This makes magic, like cuisine and clothing design, a thriving field that operates despite a lack of strong legal protection for innovation. Recently legal scholars have started looking harder at such fields, hoping to find mechanisms that can support innovation without the cost and complexity of conventional intellectual property law, and wondering how broadly those alternative mechanisms might be applied.

What makes magic unusual is that practitioners rarely rely on intellectual property law even though magic tricks are protectable by patent and as trade secrets. Patent protection should be obvious: patents cover novel mechanisms and methods, which most magic technologies are. Some classic tricks, such as the saw-a-person-in-half trick, have been patented. Trade secret protection should be obvious too: how to do a particular trick is valuable business information whose secrecy can be protected by the inventor. (The audience sees the trick done, but they don’t really see the secret of the trick.)

Yet Loshin, and apparently most magicians, think that patent and trade secret are a poor fit. There are basically three reasons for this. First, part of the value of a trick is that the audience can’t figure out how it’s done; but a patent must explain the details of the invention. Second, tricks are subject to “reverse engineering” by rival magicians who watch the trick done, repeatedly, from different parts of the audience, then do experiments to try to replicate it; and of course trade secrets are not protected against reverse engineering. Third, there’s a sort of guild mentality among magicians, holding that knowledge can be shared within the profession but must not be shared with the public. This guild mentality can’t easily be implemented within current law – a trade secret must be carefully protected, and so cannot be passed around casually within a loosely defined “community”.

The result is that the guild protects its secrets through social norms. You’re accepted into the guild by demonstrating technical prowess and following the guild’s norms over time; and you’ll be excommunicated if you violate the norms, for example by making a tell-all TV special about how popular tricks are done. (There’s an exception for casual magic tricks of the sort kids do.) The system operates informally but effectively.

As a policy guy, I have to ask whether this system is good for society as a whole. I can understand why those inside the profession would want to limit access to information – why help potential competitors? But does it really benefit society as a whole to have some unelected group deciding who gets access to certain kinds of information, and doing this outside the normal channels that (at least in principle) balance the interests of society against those of inventors? It’s not an easy question.

(To be clear, asking whether something is good or bad for society is not the same as asking whether government should regulate it. A case for regulation would require, at least, that the regulated behavior be bad for society and that there be a practically beneficial way for government to intervene.)

The best argument that magicians’ guild secrecy benefits the public is that tricks are more valuable to the public if the public doesn’t know how they are done. This is almost never the case for other technologies – knowing how your iPod works doesn’t make it less valuable to you – but it just might be true for magic, given that it exists for entertainment and you might enjoy it more if you don’t know how it’s done.

But I have my doubts that publishing information about tricks actually makes them less entertaining. Goldin’s patent on the saw-a-person-in-half trick – which explains pretty clearly how to do the trick – was issued in 1923, but the trick is still a staple today. In theory, anybody can read Goldin’s patent whenever they want; but in practice hardly anybody has read it, and we all enjoy the trick despite suspecting how it’s probably done. And do we really need to read Gaughan’s patent to know how a “levitating” magician stays up in the air? Gaughan’s cleverness is all about how to keep the audience from seeing the evidence of how it’s done.

One effect of the guild’s secrecy is that the public rarely learns who the great innovators are. We know who puts on a good show, but we rarely know who invented the tricks. The great innovators may be venerated within the profession, but they’re unknown to the public. One has to wonder whether the field would move faster, and be more innovative and entertaining, if it were more open.

DRM for Chargers: Possibly Good for Users

Apple has filed a patent application on a technology for tethering rechargeable devices (like iPods) to particular chargers. The idea is that the device will only allow its batteries to be recharged if it is connected to an authorized charger.

Whether this is good for consumers depends on how a device comes to be authorized. If “authorized” just means “sold or licensed by Apple” then consumers won’t benefit – the only effect will be to give Apple control of the aftermarket for replacement chargers.

But if the iPod’s owner decides which chargers are authorized, then this might be a useful anti-theft measure – there’s little point in stealing an iPod if you won’t be able to recharge it.

How might this work? One possibility is that when the device is plugged in to a charger it hasn’t seen before, it makes a noise and prompts the user to enter a password on the iPod’s screen. If the correct password is entered, the device will allow itself to be recharged by that charger in the future. The device will become associated with a group of chargers over time.
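
A minimal sketch of that pairing logic, assuming it works roughly as described above (the names and details are mine, not Apple’s):

    class Device:
        """Toy model of a device that only charges from approved chargers."""

        def __init__(self, password):
            self._password = password
            self._authorized = set()  # IDs of chargers the owner has approved

        def on_charger_connected(self, charger_id, prompt):
            if charger_id in self._authorized:
                return True  # known charger: charge without fuss
            # Unknown charger: beep and ask the owner for the password.
            if prompt("Password to authorize this charger: ") == self._password:
                self._authorized.add(charger_id)  # remember it from now on
                return True
            return False  # wrong password: refuse to charge

    d = Device("hunter2")
    print(d.on_charger_connected("charger-A", lambda msg: "hunter2"))  # True: now paired
    print(d.on_charger_connected("charger-A", lambda msg: ""))         # True: remembered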

Another possibility, mentioned in the patent, is that there could be a central registry of stolen iPods. When you synched your iPod with your computer, the computer would get a digitally signed statement from the registry, saying that your iPod was not listed as stolen. The computer would pass that signed statement on to the iPod. If the iPod went too long without seeing such a statement, it would demand that the user do a synch, or enter a password, before it would allow itself to be recharged.
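
Here is a hedged sketch of the device’s side of that check, assuming the registry signs statements with a public-key scheme such as Ed25519 (via the Python cryptography package). The message layout and the 30-day freshness window are my assumptions, not anything from the patent:

    import json, time
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    registry_key = ed25519.Ed25519PrivateKey.generate()  # held by the registry
    registry_pub = registry_key.public_key()             # baked into the iPod

    def issue_statement(device_id):
        """Registry signs: this device is not currently listed as stolen."""
        msg = json.dumps({"device": device_id, "ts": time.time()}).encode()
        return msg, registry_key.sign(msg)

    def allows_charging(msg, sig, device_id, max_age=30 * 86400):
        """Device checks the signature and that the statement is fresh."""
        try:
            registry_pub.verify(sig, msg)
        except InvalidSignature:
            return False
        claim = json.loads(msg)
        return claim["device"] == device_id and time.time() - claim["ts"] < max_age

    msg, sig = issue_statement("ipod-1234")
    print(allows_charging(msg, sig, "ipod-1234"))  # True while the statement is fresh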

How can we tell whether a DRM scheme like this is good for users? One sure-fire test is whether the user has the option of turning the scheme off. You don’t want a thief to be able to disable the scheme on a stolen iPod, but it’s safe to let the user disable the anti-theft feature the first time she syncs her new iPod, or later by entering a password.

We don’t know yet whether Apple will do this. But reading the patent, it looks to me like Apple has thought carefully about the legitimate anti-theft uses of this technology. That’s a good sign.

All the Interested Parties? Not Quite.

Here’s a quick quiz to detect whether you’re stuck in Washington groupthink.

There’s a patent reform bill under consideration in Congress. According to a blog entry by Andrew Noyes at the National Journal, a group of Republican senators sent a letter to Rep. Howard Berman, the chair of the relevant House subcommittee, asking that the patent bill be given more consideration before the committee votes on it. Rep. Berman responded:

“There have been a number of hearings, briefings, and meetings about these issues over the past four years,” said Berman, who introduced a companion bill, H.R.1908. “We’ve heard from representatives of all the interested parties – from independent inventors, universities, bio-technology, pharmaceutical, software and financial services industries.”

Here’s the quiz: who did Rep. Berman leave off his list of “all the interested parties”?

Rep. Berman’s omission is a common one in Washington. Start listening for this omission, and you’ll be surprised how often you hear it.

I don’t mean to pick on Rep. Berman personally. Okay, maybe I do, just a tiny bit, given some of his past actions such as co-sponsoring the ill-advised Berman-Coble bill that would have legalized denial-of-service attacks against people suspected of sharing infringing content. If this was just one congressman, once, it wouldn’t be worth noting. But given the frequency of this mistake, I think it does reveal something about the standard Washington mindset.

In the case of patent reform, there are complex issues at stake. Changes to patent law can affect innovation and competition in subtle ways. That affects all of the parties Rep. Berman mentioned, as well as the one notable group he left out. Which is …

Ordinary citizens.

Princeton-Microsoft IP Conference Liveblog

Today I’m at the Princeton-Microsoft Intellectual Property Conference. I’ll be blogging some of the panels as they occur. There are parallel sessions, and I’m on one panel, so I can’t cover everything.

The first panel is on “Organizing the Public Interest”. Panelists are Yochai Benkler, David Einhorn, Margaret Hedstrom, Larry Lessig, and Gigi Sohn. The moderator is Paul Starr.

Yochai Benkler (Yale Law) speaks first. He has two themes: decentralization of creation, and emergence of a political movement around that creation. He sees the possibility of altering the politics in three ways. First, the changing relationship between creators and users and growth in the number of creators changes how people relate to the rules. Second, we see existence proofs of the possible success of decentralized production: Linux, Skype, Flickr, Wikipedia. Third, a shift away from centralized, mass, broadcast media. He talks about political movements like free culture, Internet freedom, etc. He says these movements are coalescing and allying with each other and with other powers such as companies or nations. He is skeptical of the direct value of public reason/persuasion. He thinks instead that changing social practices will have a bigger impact in the long run.

David Einhorn (Counsel for the Jackson Laboratory, a research institution) speaks second. “I’m here to talk about mice.” Jackson Lab has lots of laboratory mice – the largest collection (community? inventory?) in the world. Fights developed around access to certain strains of mice. Gene sequences created in the lab are patentable, and research institutions are allowed to exploit those patents (even if the research was government-funded). This has led to some problems. There is an inherent tension between patent exploitation and other goals of universities (creation and open dissemination of knowledge). Lines of lab mice were patentable, and suddenly lawyers were involved whenever researchers wanted to get mice. It sounds to me like Jackson Lab is a kind of creative commons for mice. He tells stories about how patent negotiations have blocked some nonprofit research efforts.

Margaret Hedstrom (Univ. of Michigan) speaks third. She talks about the impact of IP law on libraries and archives, and how those communities have organized themselves. In the digital world, there has been a shift from buying copies of materials to licensing materials – a shift from the default copyright rules to the rules that are in the license. This means, for instance, that libraries may not be able to lend out material, or may not be able to make archival copies. Some special provisions in the law apply to libraries and archives, but not to everybody who does archiving (e.g., the Internet Archive is in the gray area). The orphan works problem is a big deal for libraries and archives, and they are working to chip away at this and other narrow legal issues. They are also talking to academic authors, urging them to be more careful about which rights they assign to the journals that publish their articles.

Larry Lessig (Stanford Law) speaks fourth. He starts by saying that most of his problems are caused by his allies, but his opponents are nicer and more predictable in some ways. Why? (1) Need to unite technologists and lawyers. (2) Need to unite libertarians and liberals. Regarding tech and law, the main conflict is about what constitutes success. He says technologists want 99.99% success, lawyers are happy with 60%. (I don’t think this is quite right.) He says that fair use and network neutrality are essentially the same issue, but they’re handled inconsistently. He dislikes the fair use system (though he likes fair use itself) because the cost and uncertainty of the system bias so strongly against use without permission, even when those uses ought to be fair – people don’t want to be right, they want to avoid having suits filed against them. Net neutrality, he says, is essentially the same problem as fair use, because it is about how to limit the ability of property owners who have monopoly power (i.e., copyright owners or ISPs) to use their monopoly property rights against the public interest. The challenge is how to keep the coalition together while addressing these issues.

Gigi Sohn (Public Knowledge) is the last speaker. Her topic is “what it’s like to be a public interest advocate on the ground.” Public Knowledge plays a key role in doing this, as part of a larger coalition. She lists six strategies that are used in practice to change the debate: (1) day to day, face to face advocacy with policymakers; (2) coalition-building with other NGOs, such as Consumers Union, librarians, etc., and especially industry (different sectors on different issues); (3) message-building, both push and pull communications; (4) grassroots organizing; (5) litigation, on offense and defense (with a shout-out to EFF); (6) working with scholars to build a theoretical framework on these topics. How has it worked? “We’ve been very good at stopping bad things”: broadcast flag, analog hole, database protection laws, etc. She says they/we haven’t been so successful at making good things happen.

Time for Q&A. Tobias Robison (“Precision Blogger”) asks Gigi how to get the financial clout needed to continue the fight. Gigi says it’s not so expensive to play defense.

Sandy Thatcher (head of Penn State University Press) asks how to reconcile the legitimate needs of copyright owners with their advocacy for narrower copyright. He suggests that university presses need the DMCA to survive. (I want to talk to him about that later!) Gigi says, as usual, that PK is interested in balance, not in abolishing the core of copyright. Margaret Hedstrom says that university presses are in a tough spot, and we don’t need to have as many university presses as we have. Yochai argues that university presses shouldn’t act just like commercial presses – if university presses are just like commercial presses why should universities and scholars have any special loyalty to them?

Anne-Marie Slaughter (Dean of the Woodrow Wilson School at Princeton) suggests that some people will be willing to take less money in exchange for the psychic satisfaction of helping people by spreading knowledge. She suggests that this is a way of showing leadership. Larry Lessig answers by arguing that many people, especially those with smaller market share, can benefit financially from allowing more access. Margaret Hedstrom gives another example of scholarly books released permissively, leading to more sales.

Wes Cohen from Duke University asserts that IP rulings (like Madey v. Duke, which vastly narrowed the experimental use exception in patent law) have had relatively little impact on the day-to-day practice of scientific research. He asks David Einhorn whether this matches his experience. David E. says that bench scientists “are going to do what they have always done” and people are basically ignoring these rules, just hoping that one research organization will sue another and that damages will be small anyway. But, he says, the law intrudes when one organization has to get research materials from another. He argues that this is a bad thing, especially when (as in most biotech research) both organizations are funded by the same government agency. Bill [didn't catch the last name], who runs tech transfer for the University of California, says that there have been problems getting access to stem cell lines.

The second panel is on the effect of patent law. Panelists are Kathy Strandburg, Susan Mann, Wesley Cohen, Stephen Burley, and Mario Biagioli. Moderator is Rochelle Dreyfuss.

First speaker is Susan Mann (Director of IP Policy, or something like that) at Microsoft. She talks about the relation between patent law and the structure of the software industry. She says people tend not to realize how the contours of patent law shape how companies develop and design products. She gives a chronology of when and why patent law came to be applied to software. She argues that patents are better suited than copyright and trade secret for certain purposes, because patents are public, are only protected if novel and nonobvious, apply to methods of computation, and are more amenable to use in standards. She advocates process-oriented reforms to raise patent quality.

Stephen Burley (biotech researcher and entrepreneur) speaks second. He tells some stories about “me-too drugs”. Example: one of the competitors of Viagra differs from the Viagra molecule by only one carbon atom. Because of the way the Viagra patent is written, the competitor could make their drug without licensing the Viagra patent. You might think this is pure free-riding, but in fact even these small differences have medical significance – in this case the drugs have the same primary effect but different side-effects. He tells another story where a new medical test cannot be independently validated by researchers because they can’t get a patent license. Here the patent is being used to prevent would-be customers from finding out about the quality of a product. (To a computer security researcher, this story sounds familiar.) He argues that the relatively free use of tools and materials in research has been hugely valuable.

Third speaker is Mario Biagioli (Harvard historian). He says that academic scientists have always been interested in patenting inventions, going back to Galileo, the Royal Society, Pascal, Huygens, and others. Galileo tried to patent the telescope. Early patents were given, not necessarily to inventors, but often to expert foreigners to give them an incentive to move. You might give a glassmaking patent to a Venetian glassmaker to give him an incentive to set up business in your city. Little explanation of how the invention worked was required, as long as the device or process produced the desired result. Novelty was not required. To get a patent, you didn’t need to invent something, you only needed to be the first to practice it in that particular place. The idea of specification – the requirement to describe the invention to the public in order to get a patent – was emphasized more recently.

Fourth speaker is Kathy Strandburg (DePaul Law). She emphasizes the social structure of science, which fosters incentives to create that are not accounted for in patent law. She argues that scientific creation is an inherently social process, with its own kind of economy of jobs and prestige. This process is pretty successful and we should be careful not to mess it up. She argues, too, that patent law doctrine hasn’t accounted adequately for innovation by users, and the tendency of users to share their innovations freely. She talks about researchers as users. When researchers are designing and using tools, they are acting as both scientists and users, so both of the factors mentioned so far will operate, making the incentive bigger than the standard story would predict. All of this argues for a robust research use exemption – a common position that seems to be emerging from several speakers so far.

Fifth and final speaker is Wesley Cohen (Duke economist). He presents his research on the impact of patents on the development and use of biotech research tools. There has been lots of concern about patenting and overly strict licensing of research tools by universities. His group did empirical research on this topic, in the biotech realm. Here are the findings. (1) Few scientists actually check whether patents might apply to them, even when their institutions tell them to check. (2) When scientists were aware of a patent they needed to license, licenses were almost always available at no cost. (3) Only rarely do scientists change their research direction because of concern over others’ patents. (4) Though patents have little impact, the need to get research materials is a bigger impediment (scientists couldn’t get a required input 20% of the time), and leads more often to changes in research direction because of inability to get materials. (5) When scientists withheld materials from their peers, the most common reasons were (a) research business activity related to the material, and (b) competition between scientists. His bottom-line conclusion: “law on the books is not the same as law in action”.

Now for the Q&A. Several questions to Wes Cohen about the details of his study results. Yochai Benkler asks, in light of the apparent practical irrelevance of patents in biotech research, what would happen if the patent system started applying strongly to that research. Wes Cohen answers that this is not so likely to happen, because there is a norm of reciprocity now, and there will still be a need to maintain good relations between different groups and institutions. It seems to me that he isn’t arguing that Benkler’s hypothetical wouldn’t be harmful, just that the hypo is unlikely to happen. (Guy in the row behind me just fell asleep. I think the session is pretty interesting…)

After lunch, we have a speech by Sergio Sa Leitao, Brazil’s Minister of Cultural Policies. He speaks in favor of cultural diversity – “a read-only culture is not right for Brazil” – and how to reconcile it with IP. His theme is the need to face up to reality and figure out how to cope with changes brought on by technology. He talks specifically about the music industry, saying that it lost precious time trying to maintain a business model that was no longer relevant. He gives some history of IP diplomacy relating to cultural diversity, and argues for continued attention to this issue in international negotiations about IP policy. He speaks in favor of a UNESCO convention on cultural diversity.

In the last session of the day, I’ll be attending a panel on compulsory licensing. I’ll be on the panel, actually, so I won’t be liveblogging.

Princeton-Microsoft Intellectual Property Conference

Please join us for the 2006 Princeton University – Microsoft Intellectual Property Conference, Creativity & I.P. Law: How Intellectual Property Fosters or Hinders Creative Work, May 18-19 at Princeton University. This public conference will explore a number of strategies for dealing with IP issues facing creative workers in the fields of information technology, biotechnology, the arts, and archiving/humanities.

The conference is co-sponsored by the Center for Arts and Cultural Policy Studies, the Program in Law and Public Affairs, and the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs and funded by the Microsoft Corporation, with additional support from the Rockefeller Foundation.

The conference features keynote addresses from Lawrence Lessig, Professor of Law at Stanford Law School, and Raymond Gilmartin, former CEO of Merck, Inc. A plenary address will be delivered by Sérgio Sá Leitão, Secretary for Cultural Policies at the Ministry of Culture, Brazil.

Six panels, bringing together experts from various disciplines and sectors, will examine the following topics:

  • Organizing the public interest
  • The construction of authorship
  • Patents and creativity
  • Tacit knowledge and the pragmatics of creative work: can IP law keep up?
  • Compulsory licensing: a solution to multiple-rights-induced gridlock?
  • New models of innovation: blurring boundaries and balancing conflicting norms

We expect the conference to generate a number of significant research initiatives designed to collect and analyze empirical data on the relationship between intellectual property regimes and the practices of creative workers.

Registration for the conference is strongly encouraged as space is limited for some events. For additional information and to register, please visit the conference web site. Online registration will be available beginning Friday, April 14.

We hope to see you in May.

Stanley N. Katz, Director, Center for Arts and Cultural Policy Studies
Paul J. DiMaggio, Research Director, Center for Arts and Cultural Policy Studies
Edward W. Felten, Director, Center for Information Technology Policy

Intellectual Property, Innovation, and Decision Architectures

Tim Wu has an interesting new draft paper on how public policy in areas like intellectual property affects which innovations are pursued. It’s often hard to tell in advance which innovations will succeed. Organizational economists distinguish centralized decision structures, in which one party decides whether to proceed with a proposed innovation, from decentralized structures, in which any one of several parties can decide to proceed.

This distinction gives us a new perspective on when intellectual property rights should be assigned, and what their optimal scope is. In general, economists favor decentralized decision structures in economic systems, based on the observation that free market economies perform better than planned centralized economies. This suggests – even accepting the useful incentives created by intellectual property – at least one reason to be cautious about the assignment of broad rights. The danger is that centralization of investment decision-making may block the best or most innovative ideas from coming to market. This concern must be weighed against the desirable ex ante incentives created by an intellectual property grant.

This is an interesting observation that opens up a whole series of questions, which Wu discusses briefly. I can’t do his discussion justice here, so I’ll just extract two issues he raises.

The first issue is whether the problems with centralized management can be overcome by licensing. Suppose Alice owns a patent that is needed to build useful widgets. Alice has centralized control over any widget innovation, and she might make bad decisions about which innovations to invest in. Suppose Bob believes that quabbling widgets will be a big hit, but Alice doesn’t like them and decides not to invest in them. If Bob can pay Alice for the right to build quabbling widgets, then perhaps Bob’s good sense (in this case) can overcome Alice’s doubts. Alice is happy to take Bob’s money in exchange for letting him sell a product that she thinks will fail; and quabbling widgets get built. If the story works out this way, then the centralization of decisionmaking by Alice isn’t much of a problem, because anyone who has a better idea (or thinks they do) can just cut a deal with Alice.

But exclusive rights won’t always be licensed efficiently. The economic literature considers the conditions under which efficient licensing will occur. Suffice it to say that this is a complicated question, and that one should not simply assume that efficient licensing is a given. Disruptive technologies are especially likely to go unlicensed.
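
Stepping back, a toy simulation can make the worry about centralization concrete. Suppose every decision-maker sees only a noisy estimate of a project’s true value; in a centralized structure a single gatekeeper decides, while in a decentralized structure any of several independent parties can greenlight a project. The parameters below are arbitrary inventions of mine, a sketch of the intuition rather than anything from Wu’s paper:

    import random

    random.seed(1)
    NOISE, THRESHOLD, JUDGES = 1.0, 1.0, 5

    def estimate(true_value):
        """One party's noisy private judgment of a project's value."""
        return true_value + random.gauss(0, NOISE)

    projects = [random.gauss(0, 1) for _ in range(10_000)]  # true qualities

    centralized = [p for p in projects if estimate(p) > THRESHOLD]
    decentralized = [p for p in projects
                     if any(estimate(p) > THRESHOLD for _ in range(JUDGES))]

    for name, chosen in (("centralized", centralized),
                         ("decentralized", decentralized)):
        greats = sum(1 for p in chosen if p > 2)  # rare great ideas funded
        flops = sum(1 for p in chosen if p < 0)   # bad projects funded anyway
        print(f"{name}: funded={len(chosen)} greats={greats} flops={flops}")

On this toy model the decentralized structure funds nearly all of the rare great projects but also far more failures, which is Wu’s tradeoff in miniature.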

Wu also discusses, based on his analysis, which kinds of industries are the best candidates for strong grants of exclusive rights.

An intellectual property regime is most clearly desirable for mature industries, by definition technologically stable, and with low or negative economic growth…. [I]f by definition profit margins are thin in a declining industry, it will be better to have only the very best projects come to market…. By the same logic, the case for strong intellectual property protections may be at its weakest in new industries, which can be described as industries that are expanding rapidly and where technologies are changing quickly…. A [decentralized] decision structure may be necessary to uncover the innovative ideas that are the most valuable, at the costs of multiple failures.

As they say in the blogosphere, read the whole thing.