Archives for October 2011

Is Insurance Regulation the Next Frontier in Open Government Data?

My friend Ray Lehman points to an intriguing opportunity to expand public access to government data: insurance regulation. The United States has a decentralized, state-based system for regulating the insurance industry. Insurance companies are required to disclose data on their premiums, claims, assets, and many other topics to the regulators of each state in which they do business. These data are then shared with the National Association of Insurance Commissioners, a private, non-profit organization that aggregates them and sells access to the resulting database. Ray tells the story:

The major clients for the NAIC’s insurance data are market analytics firms like Charlottesville, Va.-based SNL Financial and insurance rating agency A.M. Best (Full disclosure: I have been, at different times, an employee at both firms) who repackage the information in a lucrative secondary market populated by banks, broker-dealers, asset managers and private investment funds. While big financial institutions make good use of the data, the rates charged by firms like Best and SNL tend to be well out of the price range of media and academic outlets who might do likewise.

And where a private stockholder interested in reading the financials of a company whose shares he owns can easily look up the company’s SEC filings, a private policyholder interested in, say, the reserves held by the insurer he has entrusted to protect his financial future…has essentially nowhere to turn.

However, Ray points out that the recently enacted Dodd-Frank legislation may change that, as it creates a new Federal Insurance Office. That office will collect data from state regulators and likely has the option to disclose those data to the general public. Indeed, Ray argues, the Freedom of Information Act may even require that the data be disclosed to anyone who asks. The statute is ambiguous enough that in practice it’s likely to be up to FIO director Michael McRaith to decide what to do with the data.

I agree with Ray that McRaith should make the data public. As several CITP scholars have argued, free bulk access to government data has the potential to create significant value for the public. These data could be of substantial value to journalists covering the insurance industry and to academics studying insurance markets. And with some clever hacking, they could likely be made useful for consumers, who would have more information with which to evaluate the insurance companies in their state.
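To make that “clever hacking” claim a bit more concrete, here is a minimal sketch of the sort of consumer-facing analysis bulk access would enable. It assumes a hypothetical CSV export of filings with columns state, insurer, premiums_written, and claims_paid (the actual NAIC/FIO schema would surely differ) and computes each insurer’s loss ratio, one simple indicator consumers could use to compare companies operating in their state.

```python
# Sketch only: the file name and column names below are illustrative
# assumptions, not the real NAIC/FIO data format.
import csv
from collections import defaultdict


def loss_ratios_by_insurer(path, state):
    """Aggregate premiums and claims per insurer for one state and
    return each insurer's loss ratio (claims paid / premiums written)."""
    totals = defaultdict(lambda: {"premiums": 0.0, "claims": 0.0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["state"] != state:
                continue
            t = totals[row["insurer"]]
            t["premiums"] += float(row["premiums_written"])
            t["claims"] += float(row["claims_paid"])
    return {
        name: t["claims"] / t["premiums"]
        for name, t in totals.items()
        if t["premiums"] > 0
    }


if __name__ == "__main__":
    # Hypothetical usage: filings.csv holds one row per insurer filing.
    for insurer, ratio in sorted(loss_ratios_by_insurer("filings.csv", "NJ").items()):
        print(f"{insurer}: loss ratio {ratio:.2f}")
```

A journalist or hobbyist could build something like this in an afternoon once the raw filings were freely downloadable; today, the licensing fees stand in the way.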

ACM opens another hole in the paywall

Last month I wrote about Princeton University’s new open-access policy. In fact, Princeton’s policy just recognizes where many disciplines and many scholarly publishers were going already. Most of the important publication venues in Computer Science already have an open-access policy: their standard author copyright contract permits an author to make copies of his or her own paper available on the author’s personal web site or institutional repository. These publishers include the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), Springer-Verlag (for their LNCS series of conference proceedings), Cambridge University Press, MIT Press, and others.

For example, the ACM’s policy states,

Under the ACM copyright transfer agreement, the original copyright holder retains … the right to post author-prepared versions of the work covered by ACM copyright in a personal collection on their own Home Page and on a publicly accessible server of their employer, and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include this notice both embedded within the full text file and in the accompanying citation display as well:

“© ACM, YYYY. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn”

But now the ACM is trying something new; a mass mailing from ACM’s Director of Publications explains,

ACM has just launched a new referrer-linking service. It is called the ACM Author-Izer Service. In essence, ACM Author-Izer enables you to provide a free access to the definitive versions of your ACM articles permanently maintained by ACM in its Digital Library by embedding the links generated by this service in your personally maintained home-page bibliographies.

With widespread usage of this service, the need to post your author-prepared versions should be alleviated; automatic indexers will point to the article in the DL rather than alternative versions hosted elsewhere without the promise of being permanently maintained.

The ACM has not removed the author’s right to self-post copies of the articles, but clearly the publisher wants to discourage that, and to be the only source for content. Furthermore, authors can use this only if they buy in to the ACM’s “Author Profile” page, a feature that ACM has been pushing but that I suspect most authors don’t bother with. It’s an interesting strategy to capture links, or to reduce the number of copies floating around outside the control of the ACM archive. Whether it works may depend, in part, on how difficult it is for authors to use. I suspect most authors won’t bother, but if you want to see some Author-Ized links in action, click here and then click on “A Theory of Indirection via Approximation.” (I can’t link directly from this article, because the ACM permits this service from only one Web address.)

Unlike some newspapers, which are suffering badly in the Internet age, major nonprofit scholarly publishers such as the ACM are in good financial health, with a diverse array of activities and revenue sources: membership dues, conferences, refereed journals, magazines, paid job-advertisement web sites, and so on. Still, there is a lot of experimentation about how to survive as a publisher in the 21st century, and this appears to be the latest experiment.

Appeal filed in NJ voting-machines lawsuit

Paperless (DRE) voting machines went on trial in New Jersey in 2009, in the Gusciora v. Corzine lawsuit. In early 2010 Judge Linda Feinberg issued an Opinion that was flawed in many ways (factually and legally). But Judge Feinberg did at least recognize that DRE voting machines are vulnerable to software-based election fraud, and she ordered several baby-step remedies (improved security for voting machines and vote-tabulating computers). She retained jurisdiction for over a year, waiting for the State to comply with these remedies; the State never did, so eventually she gave up, and signed off on the case. But her retention of jurisdiction for such a long period prevented the Plaintiffs from appealing her ruling, until now.

The Appellate Division of the NJ court system agreed to hear an appeal, and the Plaintiffs (represented by Penny Venetis of Rutgers Law School and John McGahren and Caroline Bartlett of Patton Boggs) filed their appeal on October 12, 2011: you can read it here.

Plaintiffs point out that Judge Feinberg made many errors of law: she improperly permitted nonexpert defense witnesses (employees of Sequoia Voting Systems) to testify as experts, improperly barred certain of Plaintiffs’ expert testimony, and misapplied case law from other jurisdictions. Her misapplication of Schade v. Maryland was particularly egregious: she appropriated testimony and conclusions of Dr. Michael Shamos (defense expert witness in both the NJ and MD cases) on topics she had barred Dr. Shamos from testifying about in the NJ case. Worse yet, it is quite inappropriate to rely on these out-of-state cases in which DREs were defended, when almost every one of those states subsequently abandoned DREs even though the states won their lawsuits. In Schade v. Maryland, Schade claimed that Diebold voting machines were insecure and unreliable; the Maryland court decided otherwise; but soon afterward the legislature, convinced that the Diebold DREs were indeed insecure and unreliable, unanimously passed a bill requiring a voter-verifiable paper record.

Finally, Judge Feinberg made many errors of fact. In a nonjury civil lawsuit in New Jersey, the appeals court has authority to reconsider all factual conclusions, especially in a case such as this one, where there is a clear and voluminous trial record. For example, Plaintiffs presented many kinds of evidence about how easy it is to use software-based and hardware-based methods to steal votes on the Sequoia DREs, and the State defendants presented no witnesses at all who refuted this testimony. By failing to take account of these facts, Judge Feinberg made reversible errors.

The Digital Death of Copyright's First Sale Doctrine

The legal media’s attention has been focused this past week on Supreme Court oral arguments in Golan v. Holder, an important copyright case involving the power of Congress to “restore” private rights in creative works that are already in the public domain. In this post, I’d like to focus on an important copyright case that won’t be argued in the Supreme Court. On October 3, the Supreme Court declined to review Vernor v. Autodesk, a Ninth Circuit Court of Appeals decision involving the applicability of copyright’s first sale doctrine to transactions involving software and other digital information goods.

The first sale doctrine is the provision in copyright law that gives the purchaser of a copy of a copyrighted work the right to sell or otherwise dispose of that copy without the permission of the copyright owner. If there were no first sale doctrine, there would be no free market for used books, CDs, or DVDs, because the copyright owner’s right of distribution would reach beyond the first sale, all the way down the stream of commerce. Without the first sale doctrine, movie rental services like Netflix and Redbox wouldn’t be able to lend DVDs without authorization from studios, and you wouldn’t be able to lend the bestseller you just finished to a friend without authorization from the book’s author or publisher. Along with fair use, the first sale doctrine promotes public access to culture and information by functioning as a crucial limit on the right of a copyright owner to control the disposition of a copyrighted work. A world without the first sale doctrine in it is a world I wouldn’t want to live in, but it’s one that’s quickly taking shape, thanks in part to legal decisions like the one in Vernor.

The Ninth Circuit’s decision in Vernor significantly erodes the first sale doctrine with respect to software and other mass-licensed digital goods. The plaintiff in the case, Timothy Vernor, bought several copies of an AutoCAD software package from a direct customer of the software publisher. Vernor then resold the copies on eBay, only to be accused of having infringed the software publisher’s copyrights. He sued, seeking a declaration from the court that his sale of the software on eBay was protected by the first sale doctrine because the software publisher’s distribution right was exhausted by its “first sale” of the copies to its direct customer. By Vernor’s logic, software should be able to be purchased and resold in the same way that a book can be purchased and resold. Why, after all, should the two be treated any differently under copyright law? The buyer of a book owns her copy of the book as personal property, and her ownership of that individual copy does not at all interfere with the author’s ownership of the copyright in the work. Once the author places a particular copy of the work into the stream of commerce, the author loses the right to control the fate of that copy; the author retains, however, all of the rights the statute gives her in the copyrighted work itself, including the exclusive right to make and sell new copies of it.

The Copyright Act makes an explicit distinction between ownership of a copy of a copyrighted work–whether that copy is printed on paper or burned onto an optical disk or stored in a computer’s memory–and ownership of the copyright in the work. The copy is tangible property owned by the purchaser; the copyrighted work embodied in the copy is intangible property owned by the author. A copy is a copy, the work is the work, and never the twain shall meet. In Timothy Vernor’s case, however, the publisher of the AutoCad software argued that it never actually sold the copies Vernor bought, so there was no “first sale” for copyright purposes. Under the software publisher’s logic, which the Ninth Circuit adopted in the case, both the copy and the intellectual property embodied in the copy were only licensed, and quite restrictively so, pursuant to the terms of a mass end user license agreement (EULA); nothing was ever sold, despite the retail transaction that put copies of the software into the hands of the initial purchaser, and despite the downstream transaction that put those copies into Timothy Vernor’s hands.

Existing copyright case law makes it clear that digital copies of works, even those stored only ephemerally in RAM, are “copies” within the meaning of the Copyright Act. Moreover, the Copyright Act is clear on its face that there is a difference between ownership of a copy of a work and ownership of the work embodied in the copy. Decisions like the Ninth Circuit’s in Vernor v. Autodesk, however, permit copyright owners to conflate the copy and the work to the detriment of consumers in cases involving digital goods. Under Vernor, software copyright owners not only own the work embodied in every copy of a program they sell, they own every copy, too. Consumers are left with both empty pockets and empty hands.

As the transition from physical to streaming or cloud-based digital distribution continues, further divorcing copyrighted works from their traditional tangible embodiments, it will increasingly be the case that consumers do not own the information goods they buy (or, rather, think they’ve bought). Under the court’s decision in Vernor, all a copyright owner has to do to effectively repeal the statutory first sale doctrine is draft a EULA that (1) specifies that the user is granted a license; (2) significantly restricts the user’s ability to transfer the software; and (3) imposes notable use restrictions. Sad to say, it’s about as easy as falling off a log.

Decrees and Buses: How the Open Government Partnership Translates into Action in Brazil

The U.S. and Brazil teamed up to form an important global initiative, the Open Government Partnership (OGP). The project was launched by President Obama and President Dilma Rousseff right before the General Assembly of the United Nations this year (which, by the way, was opened for the first time by a woman, President Rousseff).

The initiative outlines four key commitments to be undertaken by participating governments: a) increase the availability of information about government activities; b) support civic participation; c) implement the highest standards of professional integrity; and d) increase access to new technologies for openness and accountability. As of September 20, the OGP declaration had been endorsed by Indonesia, Mexico, Norway, the Philippines, South Africa, the UK, the US, and Brazil.

It is doubtless a great initiative, one that has been supported by many NGOs and other civil society organizations worldwide. However, a question remains about how the initiative will be translated into concrete action in each participating country. One criticism often heard in Brazil is that the initiative has not been as broadly publicized domestically as it has been internationally.

A few reasons might help explain that. One is the OGP’s commitment requiring governments to “increase the availability of information about government activities”. Brazil still does not have a “freedom of information act”. Citizens seeking government information have to navigate several different pieces of legislation, none of which provides a comprehensive and satisfactory solution. The very few avenues that exist for access to information include class action laws (“ação civil pública” and “ação popular”) and the traditional writ of mandamus. None of these is really practical or accessible to the common citizen.

In the meantime, Congress has been discussing since April 2010 a draft bill that would enact a true freedom of information law in the country. Named “PLC 41/2010”, it quickly proved to be a controversial proposal. Members of the Senate, most notably Senator Fernando Collor, resorted to procedural speed bumps to slow down debate on the bill, to the point that it is currently stalled in the Senate. Senator Collor also proposed a substitute version of the text, which would make the law basically toothless.

One of the reasons for the controversy is that the bill would grant access to documents from the military government, which ruled Brazil from 1964 until 1985. The substitute version presented by Senator Collor includes exceptions that would virtually create “eternal confidentiality” for certain documents. That proposal has been harshly criticized by both civil society and the press, but to no avail. The substitute version still has to be voted on, and the “eternal confidentiality” provisions could end up being incorporated into the final text. The final vote might take months, or even years, to happen. And Brazil will remain without a freedom of information law until then, in spite of its commitments under the OGP.

That is not, however, the end of the story. Fully aware that Congress might take time to enact the law, President Rousseff was quick to issue a federal decree establishing an array of provisions in support of open government. The Decree was enacted on September 15, 2011, right in time for the UN General Assembly and the launch of the OGP. The president exercised her powers under Article 84 of the Brazilian Constitution, whereby the president has the authority to provide for the “organization and structure of federal administration, in the cases where there is neither increase of expenses nor creation or extinction of public agencies.”

The Decree establishes a set of principles to promote open government, and creates an inter-ministerial committee (called GICA) with the mandate to propose and coordinate open government initiatives inside the federal government. The Decree does not go so far as to set out concrete steps or goals that must be implemented, but it certainly creates a framework in which those concrete steps might (or might not) emerge.

Being a federal decree, it is binding only on the federal government. However, other state and city governments have been adopting policies consistent with the commitments of the OGP. One example is the city of São Paulo, which recently enacted a decree mandating that all learning materials produced by the city be licensed and made available under an open license, such as Creative Commons. That is just one example of the growing Open Educational Resources movement in Brazil, and of the strength of civil society in pushing for an open government agenda well before the OGP came into being.

One of my favorite examples, however, is the “Hacker Bus” (“Ônibus Hacker”), an initiative by a group of hacktivists called Transparência Hacker. They had been pushing the open government agenda for years. Tired of being stood up by state and city government officials, they turned to Catarse, a Brazilian version of Kickstarter. They asked for money to buy an old bus that would travel around Brazil promoting meetings between the roving group of hackers and city and state government authorities. They quickly raised R$58,000 (approximately $32,000) and the bus has been acquired. The group is making final preparations to start their hacker trip, which is expected to raise the visibility of the commitments undertaken by Brazil under the OGP.

In short, the Open Government agenda in Brazil is not a new one, and the fact that Brazil and the U.S. are co-chairing the Open Government Partnership seems like a natural development. There will be more speed bumps ahead, whether in the way of Congress or in the way of the hacker bus, but at least both seem to be headed in the right direction.

UPDATE: The Brazilian Freedom of Information Law was passed on October 25th, rejecting the “eternal confidentiality” articles and the substitute version prepared by Senator Collor, and therefore sticking to the original (and better) text. It now remains to be seen how the law will actually be implemented, and whether access to public information will become a tangible, effective right for most citizens.