
Archives for 2011

A Possible Constitutional Caveat to SOPA

Tomorrow, a hearing in the House will consider H.R. 3261, the Stop Online Piracy Act (SOPA). There are many frustrating and concerning aspects of this bill. Perhaps most troubling, the current proposed text would undermine the existing safe harbor regime that gives online service providers clear, predictable, and reasonable obligations with respect to their users’ sometimes-infringing activities. Under SOPA’s new, ambiguous requirements, an online service provider might find itself in court merely for refusing to deploy a new technological surveillance system offered to it by rightholders — because such a refusal could be considered “deliberate action[] to avoid confirming” infringements by its users.

SOPA also incorporates DNS blocking provisions, substantively similar to the Senate’s PROTECT IP Act, that are designed to obstruct Americans’ access to “foreign infringing site[s].” It empowers the Attorney General to require a service provider to “take technically feasible and reasonable measures designed to prevent access by its subscribers located within the United States to the foreign infringing site.” This is a deeply troubling provision — and a stark departure from existing law — in that it would put U.S. Internet Service Providers into the business of obstructing, rather than providing, Americans’ access to the worldwide Internet, and would do so coarsely, since those “reasonable measures” could easily apply to whole sites rather than particular infringing pages or sections.
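To make the mechanism concrete: DNS blocking of this kind happens at the ISP’s resolver, which simply refuses to answer queries for blocked names. The sketch below is illustrative only, with a hypothetical blocklist entry; real ISP resolvers would implement this with response-policy zones in software like BIND or Unbound, not application code.

```python
# Minimal sketch of resolver-side DNS blocking, assuming a hypothetical
# blocklist. Real ISP resolvers would use response-policy zones rather
# than application code like this.
import socket

BLOCKED_DOMAINS = {"infringing-example.com"}  # hypothetical blocklist entry

def resolve(hostname: str) -> str:
    """Return an IPv4 address for hostname, refusing blocked names."""
    # Note the coarseness: the whole domain is refused, not just
    # particular infringing pages or sections of the site.
    if hostname.lower().rstrip(".") in BLOCKED_DOMAINS:
        raise LookupError(f"refusing to resolve blocked domain: {hostname}")
    return socket.gethostbyname(hostname)
```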

Intriguingly, a site is a “foreign infringing site” only if — among other criteria — it “would . . . be subject to seizure in the United States in an action brought by the Attorney General if such site were a domestic Internet site.” This is a clear reference to Operation In Our Sites, the ongoing program in which federal officials (acting under a controversial interpretation of current law) “seize” targeted sites by taking control over their Internet domain names and causing those names to point to sternly worded warning banners instead of to the targeted site. Because these seizures affect whole sites (rather than just the offending pages or files), and because they occur before the defendant receives notice or an opportunity for an adversary hearing, they raise serious First Amendment concerns. In other words, although many sites have been seized in fact, it is far from clear that any of them is validly “subject to seizure” under a correct interpretation of current law. As I wrote in a recent working paper,

When a domain name seizure under the current process is effective, it removes all the content at the targeted domain name, potentially including a significant amount of protected expressive material, from public view—without any court hearing. In other words, these are ex parte seizures of web sites that may contain, especially in the context of music or film blogs where people share opinions about media and offer their own remixes, significant amounts of First Amendment protected expression.

The Supreme Court has held in the obscenity context that “[w]hile a single copy of a book or film may be seized and retained for evidentiary purposes based on a finding of probable cause, the publication may not be taken out of circulation completely until there has been a determination of obscenity after an adversary hearing.” The hearing is needed because the evidentiary burden for restraining speech is greater than the burden for obtaining a warrant: “[M]ere probable cause to believe a legal violation has transpired is not adequate to remove books or films from circulation.” In that case, the speech at issue was alleged to be obscene, and hence unprotected by the First Amendment, but the Court held that the unlawfulness of the speech needed to be confirmed through an adversary hearing, before the speech could be suppressed.

ICE does make some ex parte effort, before a seizure, to verify that the websites it targets are engaged in criminal IPR infringement: As the agency’s director testified, “[f]or each domain name seized, ICE investigators independently obtained counterfeit trademarked goods or pirated copyrighted material that was in turn verified by the rights holders as counterfeit.”

Domain seizures might be distinguished from these earlier cases because “it is only the property interest in a domain name that is being seized here, not the content of the web site itself or the servers that the content resides on.” But the gravamen of the Supreme Court’s concern in past cases has been the amount of expression suppressed by a seizure, rather than the sheer quantity of items seized.

The operators of one targeted site are currently challenging an In Our Sites seizure, on First Amendment and other grounds, in an appeal before the Second Circuit. (An amicus brief submitted by EFF, CDT, and Public Knowledge makes for excellent background reading.) The district court ruling below apparently turned on the defendant’s ability to demonstrate financial harm (rather than on the First Amendment issues), so it is possible that the court will not reach the First Amendment question. But it is also possible that the Second Circuit will take a dim view of the constitutionality of the In Our Sites seizures — and by extension, a dim view of the Attorney General’s constitutional power to “subject [domestic web sites] to seizure.” In that event, the scope of this portion of SOPA may turn out to be much narrower than its authors intend.

Don't Regulate the Internet. No, Wait. Regulate the Internet.

When Congress considered net neutrality legislation in the form of the Internet Freedom Preservation Act of 2008 (H.R. 5353), representatives of corporate copyright owners weighed in to oppose government regulation of the Internet. They feared that such regulation might inhibit their private efforts to convince ISPs to help them enforce copyrights online through various forms of broadband traffic management (e.g., filtering and bandwidth shaping). “Our view,” the RIAA’s Mitch Bainwol testified at a Congressional hearing, “is that the marketplace is generally a better mechanism than regulation for addressing such complex issues as how to address online piracy, and we believe the marketplace should be given the chance to succeed.” And the marketplace presumably did succeed, at least from the RIAA’s point of view, when ISPs and corporate rights owners entered into a Memorandum of Understanding last summer to implement a standardized, six-strikes graduated-response protocol for curbing illegal domestic P2P file sharing. Chalk one up for the market.

What, then, should we make of the RIAA’s full-throated support for the Senate’s pending PROTECT IP Act (S. 968) and its companion bill in the House, SOPA (H.R. 3261)? PROTECT IP and SOPA are bills that would regulate the technical workings of the Internet by requiring operators of domain name servers to block user access to “rogue websites”—defined in PROTECT IP as sites “dedicated to infringing activities”—by preventing the domain names for those sites from resolving to their corresponding IP addresses. In a recent RIAA press release on PROTECT IP, the RIAA’s Bainwol praised the draft legislation, asserting the need for—you guessed it—new government regulation of the Internet: “[C]urrent laws have not kept pace with criminal enterprises that set up rogue websites overseas to escape accountability.” So much, I guess, for giving the marketplace the chance to succeed.

As the Social Science Research Council’s groundbreaking 2011 report on global piracy concluded, the marketplace could succeed in addressing the problem of piracy beyond U.S. borders if corporate copyright owners were willing to address global disparities in the accessibility of legal digital goods. As the authors explain, “[t]he flood of legal media goods available in high-income countries over the past two decades has been a trickle in most parts of the world.” Looking at the statistics on piracy in the developing world from the consumption side rather than the production side, the SSRC authors assert that what developing markets want and need are “price and service innovations” that have already been rolled out in the developed world. Who is in a better position to deliver such innovations, through the global marketplace, than the owners of copyrights in digital entertainment and information goods? Why not give the marketplace another chance to succeed, particularly when the alternative presented is a radical policy intrusion into the fundamental operation of the Internet?

The RIAA’s political strategy in the war on piracy has been alternately to oppose and support government regulation of the Internet, depending on what’s expedient. I wonder if rights owners and the trade groups that represent them experience any sense of cognitive dissonance when they advocate against something at one moment and for it a little while later—to the same audience, on the same issue.

Is Insurance Regulation the Next Frontier in Open Government Data?

My friend Ray Lehman points to an intriguing opportunity to expand public access to government data: insurance regulation. The United States has a decentralized, state-based system for regulating the insurance industry. Insurance companies must disclose data on their premiums, claims, assets, and many other topics to the regulators of each state in which they do business. These data are then shared with the National Association of Insurance Commissioners, a private, non-profit organization that aggregates them and sells access to the combined database. Ray tells the story:

The major clients for the NAIC’s insurance data are market analytics firms like Charlottesville, Va.-based SNL Financial and insurance rating agency A.M. Best (Full disclosure: I have been, at different times, an employee at both firms) who repackage the information in a lucrative secondary market populated by banks, broker-dealers, asset managers and private investment funds. While big financial institutions make good use of the data, the rates charged by firms like Best and SNL tend to be well out of the price range of media and academic outlets who might do likewise.

And where a private stockholder interested in reading the financials of a company whose shares he owns can easily look up the company’s SEC filings, a private policyholder interested in, say, the reserves held by the insurer he has entrusted to protect his financial future…has essentially nowhere to turn.

However, Ray points out that the recently enacted Dodd-Frank legislation may change that: it creates a new Federal Insurance Office, which will collect data from state regulators and likely has the option to disclose those data to the general public. Indeed, Ray argues, the Freedom of Information Act may even require that the data be disclosed to anyone who asks. The statute is ambiguous enough that, in practice, it will likely be up to FIO director Michael McRaith to decide what to do with the data.

I agree with Ray that McRaith should make the data public. As several CITP scholars have argued, free bulk access to government data has the potential to create significant value for the public. These data could be of substantial value to journalists covering the insurance industry and academics studying insurance markets. And with some clever hacking, they could likely be made useful to consumers, who would have more information with which to evaluate the insurance companies in their state.
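For a sense of what that “clever hacking” might look like: given bulk filings in even a simple tabular form, a few lines of code could surface consumer-relevant measures such as per-state loss ratios. The file layout and column names below are hypothetical, since the actual NAIC/FIO data format is not publicly documented; this is a sketch of the kind of analysis bulk access would enable, not of the real data.

```python
# Illustrative sketch only: per-state loss ratios from a CSV of insurer
# filings. Column names ("state", "premiums_earned", "claims_paid") are
# hypothetical; the real NAIC/FIO format is not publicly documented.
import csv
from collections import defaultdict

def loss_ratios_by_state(path: str) -> dict[str, float]:
    """Claims paid divided by premiums earned, aggregated per state."""
    premiums: dict[str, float] = defaultdict(float)
    claims: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            premiums[row["state"]] += float(row["premiums_earned"])
            claims[row["state"]] += float(row["claims_paid"])
    return {st: claims[st] / premiums[st]
            for st in premiums if premiums[st] > 0}
```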

ACM opens another hole in the paywall

Last month I wrote about Princeton University’s new open-access policy. In fact, Princeton’s policy just recognizes where many disciplines and many scholarly publishers were going already. Most of the important publication venues in computer science already have an open-access policy: that is, their standard author copyright contract permits authors to post copies of their own papers on their personal web sites or in institutional repositories. These publishers include the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), Springer Verlag (for their LNCS series of conference proceedings), Cambridge University Press, MIT Press, and others.

For example, the ACM’s policy states,

Under the ACM copyright transfer agreement, the original copyright holder retains … the right to post author-prepared versions of the work covered by ACM copyright in a personal collection on their own Home Page and on a publicly accessible server of their employer, and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include this notice both embedded within the full text file and in the accompanying citation display as well:

“© ACM, YYYY. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn”
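For an author filling in that template, the substitution is mechanical; here is a trivial sketch with entirely made-up metadata (the real values come from your paper’s ACM citation page):

```python
# Fills in ACM's required self-posting notice. Every value below is
# made up for illustration; substitute your paper's actual metadata.
notice = (
    "© ACM, {year}. This is the author's version of the work. "
    "It is posted here by permission of ACM for your personal use. "
    "Not for redistribution. The definitive version was published in "
    "{pub}, {{{vol}, {iss}, ({date})}} "
    "http://doi.acm.org/10.1145/{doi}"
).format(year=2011, pub="Communications of the ACM",
         vol="VOL 54", iss="ISS 11", date="November 2011",
         doi="1111111.2222222")  # hypothetical DOI suffix
print(notice)
```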

But now the ACM is trying something new; a mass mailing from ACM’s Director of Publications explains,

ACM has just launched a new referrer-linking service. It is called the ACM Author-Izer Service. In essence, ACM Author-Izer enables you to provide a free access to the definitive versions of your ACM articles permanently maintained by ACM in its Digital Library by embedding the links generated by this service in your personally maintained home-page bibliographies.

With widespread usage of this service, the need to post your author-prepared versions should be alleviated; automatic indexers will point to the article in the DL rather than alternative versions hosted elsewhere without the promise of being permanently maintained.

The ACM has not removed the author’s right to self-post copies of articles, but clearly the publisher wants to discourage that practice and to become the sole source for the content. Furthermore, authors can use this service only if they buy into the ACM’s “Author Profile” page, a feature that ACM has been pushing but that I suspect most authors don’t bother with. It’s an interesting strategy to capture links, or to reduce the number of copies floating around outside the control of the ACM archive. Whether it works may depend, in part, on how difficult it is for authors to use. I suspect most authors won’t bother, but if you want to see some Author-Ized links in action, click here and then click on “A Theory of Indirection via Approximation.” (I can’t link directly from this article, because the ACM permits this service from only one Web address.)
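ACM hasn’t published how Author-Izer decides when to waive the paywall, but the one-Web-address restriction suggests a check of the HTTP Referer header against the author’s single registered page. Here is a guess at that logic; the names and the registry mapping are hypothetical throughout, and this is emphatically not ACM’s actual implementation.

```python
# Speculative sketch of the kind of referrer gate that Author-Izer's
# one-registered-address rule implies. The registry mapping and URLs
# are hypothetical; this is not ACM's actual implementation.

REGISTERED_PAGE = {  # DOI -> the one home-page URL allowed to deep-link
    "10.1145/1111111.2222222": "https://www.cs.example.edu/~author/",
}

def grant_free_access(doi: str, referer: str | None) -> bool:
    """Waive the paywall only for visitors arriving from the author's
    registered home page, as indicated by the Referer header."""
    allowed = REGISTERED_PAGE.get(doi)
    return bool(allowed and referer and referer.startswith(allowed))
```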

Unlike some newspapers, which are suffering badly in the Internet age, major nonprofit scholarly publishers such as the ACM are in good financial health, with a diverse array of activities and revenue sources: membership dues, conferences, refereed journals, magazines, paid job-advertisement web sites, and so on. Still, there is a lot of experimentation with how to survive as a publisher in the 21st century, and this appears to be the latest experiment.

Appeal filed in NJ voting-machines lawsuit

Paperless direct-recording electronic (DRE) voting machines went on trial in New Jersey in 2009, in the Gusciora v. Corzine lawsuit. In early 2010, Judge Linda Feinberg issued an Opinion that was flawed in many ways, both factually and legally. But Judge Feinberg did at least recognize that DRE voting machines are vulnerable to software-based election fraud, and she ordered several baby-step remedies (improved security for voting machines and vote-tabulating computers). She retained jurisdiction for over a year, waiting for the State to comply with these remedies; the State never did, so eventually she gave up and signed off on the case. But her retention of jurisdiction for such a long period prevented the Plaintiffs from appealing her ruling until now.

The Appellate Division of the NJ court system agreed to hear an appeal, and the Plaintiffs (represented by Penny Venetis of Rutgers Law School and John McGahren and Caroline Bartlett of Patton Boggs) filed their appeal on October 12, 2011: you can read it here.

Plaintiffs point out that Judge Feinberg made many errors of law: she improperly permitted nonexpert defense witnesses (employees of Sequoia Voting Systems) to testify as experts, she improperly barred some of Plaintiffs’ expert testimony, and she misapplied case law from other jurisdictions. Her misapplication of Schade v. Maryland was particularly egregious: she appropriated testimony and conclusions of Dr. Michael Shamos (defense expert witness in both the NJ and MD cases) on topics about which she had barred Dr. Shamos from testifying in the NJ case. Worse yet, it is inappropriate to rely on these out-of-state cases in which DREs were defended, when almost every one of those states subsequently abandoned DREs even after winning its lawsuit. In Schade v. Maryland, Schade claimed that Diebold voting machines were insecure and unreliable; the Maryland court decided otherwise; but soon afterward the legislature, convinced that the Diebold DREs were indeed insecure and unreliable, unanimously passed a bill requiring a voter-verifiable paper record.

Finally, Judge Feinberg made many errors of fact. In a nonjury civil lawsuit in New Jersey, the appeals court has authority to reconsider all factual conclusions, especially in a case such as this one, where there is a clear and voluminous trial record. For example, Plaintiffs presented many kinds of evidence about how easy it is to use software-based and hardware-based methods to steal votes on the Sequoia DREs, and the State defendants presented no witnesses at all who refuted this testimony. By failing to take account of these facts, Judge Feinberg made reversible errors.