November 24, 2024

Is Insurance Regulation the Next Frontier in Open Government Data?

My friend Ray Lehman points to an intriguing opportunity to expand public access to government data: insurance regulation. The United States has a decentralized, state-based system for regulating the insurance industry. Insurance companies are required to disclose data on their premiums, claims, assets, and many other topics to state regulators for each state in which they do business. These data are then shared with the National Association of Insurance Commissioners, a private, non-profit organization that combines them and sells access to the resulting database. Ray tells the story:

The major clients for the NAIC’s insurance data are market analytics firms like Charlottesville, Va.-based SNL Financial and insurance rating agency A.M. Best (Full disclosure: I have been, at different times, an employee at both firms) who repackage the information in a lucrative secondary market populated by banks, broker-dealers, asset managers and private investment funds. While big financial institutions make good use of the data, the rates charged by firms like Best and SNL tend to be well out of the price range of media and academic outlets who might do likewise.

And where a private stockholder interested in reading the financials of a company whose shares he owns can easily look up the company’s SEC filings, a private policyholder interested in, say, the reserves held by the insurer he has entrusted to protect his financial future…has essentially nowhere to turn.

However, Ray points out that the recently enacted Dodd-Frank legislation may change that, as it creates a new Federal Insurance Office. That office will collect data from state regulators and likely has the option to disclose that data to the general public. Indeed, Ray argues, the Freedom of Information Act may even require that the data be disclosed to anyone who asks. The statute is ambiguous enough that in practice it's likely to be up to FIO director Michael McRaith to decide what to do with the data.

I agree with Ray that McRaith should make the data public. As several CITP scholars have argued, free bulk access to government data has the potential to create significant value for the public. These data could be of substantial value for journalists covering the insurance industry and academics studying insurance markets. And with some clever hacking, they could likely be made useful for consumers, who would have more information with which to evaluate the insurance companies in their state.
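
To make that last point concrete, here is a minimal sketch of the kind of consumer-facing tool such data could support. It assumes a hypothetical bulk release in CSV form (the file name filings.csv and the column names state, insurer, reserves, and claims are my own invention, since no such FIO data set exists yet), and simply ranks insurers in one state by how well their reserves cover reported claims:

    import csv
    from collections import defaultdict

    def reserve_ratios(path, state):
        """Rank insurers in one state by their reserves-to-claims ratio."""
        totals = defaultdict(lambda: {"reserves": 0.0, "claims": 0.0})
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["state"] != state:
                    continue
                totals[row["insurer"]]["reserves"] += float(row["reserves"])
                totals[row["insurer"]]["claims"] += float(row["claims"])
        # Skip insurers with no reported claims to avoid division by zero.
        ratios = [
            (name, t["reserves"] / t["claims"])
            for name, t in totals.items()
            if t["claims"] > 0
        ]
        return sorted(ratios, key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        for name, ratio in reserve_ratios("filings.csv", "NJ"):
            print(f"{name}: {ratio:.2f}")

A real tool would have to cope with the actual schema and the accounting subtleties of statutory filings, but even a script this small suggests how cheaply journalists or consumer groups could build on open bulk data.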

The Digital Death of Copyright's First Sale Doctrine

The legal media’s attention has been focused this past week on Supreme Court oral arguments in Golan v. Holder, an important copyright case involving the power of Congress to “restore” private rights in creative works that are already in the public domain. In this post, I’d like to focus on an important copyright case that won’t be argued in the Supreme Court. On October 3, the Supreme Court declined to review Vernor v. Autodesk, a Ninth Circuit Court of Appeals decision involving the applicability of copyright’s first sale doctrine to transactions involving software and other digital information goods.

The first sale doctrine is the provision in copyright law that gives the purchaser of a copy of a copyrighted work the right to sell or otherwise dispose of that copy without the permission of the copyright owner. If there were no first sale doctrine, there would be no free market for used books, CDs, or DVDs, because the copyright owner’s right of distribution would reach beyond the first sale, all the way down the stream of commerce. Without the first sale doctrine, movie rental services like Netflix and Redbox wouldn’t be able to lend DVDs without authorization from studios, and you wouldn’t be able to lend the bestseller you just finished to a friend without authorization from the book’s author or publisher. Along with fair use, the first sale doctrine promotes public access to culture and information by functioning as a crucial limit on the right of a copyright owner to control the disposition of a copyrighted work. A world without the first sale doctrine in it is a world I wouldn’t want to live in, but it’s one that’s quickly taking shape, thanks in part to legal decisions like the one in Vernor.

The Ninth Circuit’s decision in Vernor significantly erodes the first sale doctrine with respect to software and other mass-licensed digital goods. The plaintiff in the case, Timothy Vernor, bought several copies of an AutoCAD software package from a direct customer of the software publisher. Vernor then resold the copies on eBay, only to be accused of having infringed the software publisher’s copyrights. He sued, seeking a declaration from the court that his sale of the software on eBay was protected by the first sale doctrine because the software publisher’s distribution right was exhausted by its “first sale” of the copies to its direct customer. By Vernor’s logic, software can be bought and resold in the same way that a book can. Why should the two be treated any differently under copyright law, after all? The buyer of a book owns her copy of the book as personal property, and her ownership of that individual copy does not at all interfere with the author’s ownership of the copyright in the work. Once the author places a particular copy of the work into the stream of commerce, the author loses the right to control the fate of that copy; the author retains, however, all of the rights the statute gives her in the copyrighted work itself, including the exclusive right to make and sell new copies of it.

The Copyright Act makes an explicit distinction between ownership of a copy of a copyrighted work (whether that copy is printed on paper, burned onto an optical disc, or stored in a computer’s memory) and ownership of the copyright in the work. The copy is tangible property owned by the purchaser; the copyrighted work embodied in the copy is intangible property owned by the author. A copy is a copy, the work is the work, and never the twain shall meet. In Timothy Vernor’s case, however, the publisher of the AutoCAD software argued that it never actually sold the copies Vernor bought, so there was no “first sale” for copyright purposes. Under the software publisher’s logic, which the Ninth Circuit adopted in the case, both the copy and the intellectual property embodied in the copy were only licensed, and quite restrictively so, pursuant to the terms of a mass end user license agreement (EULA); nothing was ever sold, despite the retail transaction that put copies of the software into the hands of the initial purchaser, and despite the downstream transaction that put those copies into Timothy Vernor’s hands.

Existing copyright case law makes it clear that digital copies of works, even those stored only ephemerally in RAM, are “copies” within the meaning of the Copyright Act. Moreover, the Copyright Act is clear on its face that there is a difference between ownership of a copy of a work and ownership of the work embodied in the copy. Decisions like the Ninth Circuit’s in Vernor v. Autodesk, however, permit copyright owners to conflate the copy and the work to the detriment of consumers in cases involving digital goods. Under Vernor, software copyright owners not only own the work embodied in every copy of a program they sell, they own every copy, too. Consumers are left with both empty pockets and empty hands.

As the transition from physical to streaming or cloud-based digital distribution continues, further divorcing copyrighted works from their traditional tangible embodiments, it will increasingly be the case that consumers do not own the information goods they buy (or, rather, think they’ve bought). Under the court’s decision in Vernor, all a copyright owner has to do to effectively repeal the statutory first sale doctrine is draft a EULA that (1) specifies that the user is granted a license; (2) significantly restricts the user’s ability to transfer the software; and (3) imposes notable use restrictions. Sad to say, it’s about as easy as falling off a log.

Decrees and Buses: How the Open Government Partnership Translates into Action in Brazil

The U.S. and Brazil teamed up to form an important global initiative, the Open Government Partnership (OGP). The project was launched by President Obama and President Dilma Rousseff right before the General Assembly of the United Nations this year (which, by the way, was opened for the first time by a woman, President Rousseff).

The initiative outlines four key commitments to be undertaken by participating governments: a) increase the availability of information about government activities; b) support civic participation; c) implement the highest standards of professional integrity; d) increase access to new technologies for openness and accountability. As of September 20, the OGP declaration had been endorsed by Indonesia, Mexico, Norway, the Philippines, South Africa, the UK, the US, and Brazil.

It is doubtless a great initiative, and one that has been supported by many NGOs and other civil society organizations worldwide. However, a question remains about how the initiative will be translated into concrete action in each participating country. One criticism often heard in Brazil is that the initiative has not been as broadly publicized internally as it has been internationally.

A few reasons might help explain that. One is the OGP’s commitment that requires governments to “increase the availability of information about government activities”. Brazil still does not have a “freedom of information act”. Citizens seeking government information have to navigate through several different pieces of legislation, none of them providing a comprehensive and satisfactory solution. The very few avenues that exist for access to information include class action laws (“ação civil pública” and “ação popular”) and the traditional writ of mandamus. None of these is really practical or accessible to the common citizen.

In the meantime, Congress has been discussing since April 2010 a draft bill that would truly enact an effective freedom of information law in the country. Named “PLC 41/2010”, it quickly proved itself a controversial proposal. Members of the Senate, most notably Senator Fernando Collor, resorted to procedural speed bumps in order to slow down discussion of the bill, to the point that it is currently stalled in the Senate. Senator Collor also proposed a substitute version of the text, which would make the law basically toothless.

One of the reasons for the controversy is that the bill would grant access to documents from the military government, which ruled Brazil from 1964 until 1985. The substitute version presented by Senator Collor includes exceptions that would virtually create “eternal confidentiality” for certain documents. That proposal has been harshly criticized by both civil society and the press, but to no avail. The substitute version has yet to be voted on, and the “eternal confidentiality” provisions could still end up being incorporated into the text. The final vote might take months, or even years, to happen. And Brazil will remain without a freedom of information law until then, in spite of its commitments under the OGP.

That is not, however, the end of the story. Fully aware that Congress might take time to enact the law, President Rousseff was quick to issue a federal decree establishing an array of provisions in support of open government. The Decree was enacted on September 15, 2011, just in time for the UN General Assembly and the launch of the OGP. The president exercised the powers granted to her under Article 84 of the Brazilian Constitution, whereby the president has the authority to provide for the “organization and structure of federal administration, in the cases where there is neither increase of expenses nor creation or extinction of public agencies.”

The Decree establishes a set of principles to promote open government and creates an inter-ministerial committee (called GICA) with the mandate to propose and coordinate open government initiatives inside the federal government. The Decree does not go so far as to set out concrete steps or goals that must be implemented, but it certainly creates a framework within which such steps might (or might not) emerge.

Being a federal decree, it is binding only on the federal government. However, state and city governments have also been adopting policies consistent with the commitments of the OGP. One example is the city of Sao Paulo, which recently enacted a decree mandating that all learning materials produced by the city be licensed and made available under an open license, such as Creative Commons. That is just one example of the growing Open Educational Resources movement in Brazil, and of the strength of civil society in pushing for an open government agenda well before the OGP came into being.

One of my favorite examples, however, is the “Hacker Bus” (“Onibus Hacker”), an initiative by a group of hacktivists called Transparencia Hacker. They had been pushing the open government agenda for years. Tired of being stood up by state and city government officers, they turned to Catarse, a Brazilian version of Kickstarter. They asked for money to buy an old bus that would travel around Brazil promoting meetings between the itinerant group of hackers and city and state government authorities. They quickly raised R$58,000 (approximately $32,000) and the bus has been acquired. The group is making final preparations for its hacker trip, which is expected to raise the visibility of the commitments undertaken by Brazil under the OGP.

In short, the Open Government agenda in Brazil is not a new one, and the fact that Brazil and the U.S. are co-chairing the Open Government Partnership seems a natural development. There will be more speed bumps ahead, whether in the path of Congress or in the path of the hacker bus, but at least both seem to be headed in the right direction.

UPDATE: The Brazilian Freedom of Information Law was passed on October 25th, rejecting the “eternal confidentiality” articles and the substitute version prepared by Senator Collor, thereby sticking to the original (and better) text. It now remains to be seen how the law will actually be implemented, and whether access to public information will become an effective, tangible right for most citizens.

Corruption Bureau assigns fox to guard henhouse

Recently I wrote about my discovery that someone erased evidence on an election computer in Cumberland County, NJ. After something went wrong in a Primary Election in June 2011, the Superior Court (the Hon. David E. Krell) had ordered the County Board of Elections to make the computer available for me (the Plaintiffs’ expert) to examine.

When I examined the computer on August 17, among those watching me were the County Administrator of Elections (Lizbeth Hernandez), the Director of the New Jersey Division of Elections (Robert Giles), and a Deputy Attorney General of the State of New Jersey (George Cohen). This is quite a lot of firepower for reviewing a rather small election (43 votes cast in total).

In my examination of the computer, I noticed that files and logs had been erased the day before. I notified the Court, and within a few days an IT specialist employed by the county wrote, in an affidavit, that he had been asked by the County Administrator of Elections to examine the computer the day before my own examination, and that at that time he had erased the files and cleared the logs.

We do not know exactly what motivated Ms. Hernandez to ask the IT specialist to fiddle with the computer. The IT specialist himself says “I was asked by Lizbeth Hernandez to determine the date the hardening process was applied to the laptop.” Why is this date important? Back in 2010, a different judge of the Superior Court (the Hon. Linda R. Feinberg) had ordered the State to secure the computers used in conducting elections by applying these “hardening guidelines.” Mr. Giles was the one responsible for making sure the State (and all its counties) complied with this order more than a year ago. In August 2011, did Mr. Giles ask Ms. Hernandez whether the “hardening guidelines” had been applied? Perhaps these election officials were concerned that I might discover something about late compliance, or noncompliance, with Judge Feinberg’s order.

That is, the IT specialist’s affidavit points to concern about whether Mr. Giles had effectively brought New Jersey (including Cumberland County) into compliance; by erasing the logs and temporary files, the IT specialist erased evidence about compliance or noncompliance.

Judge Krell, down in Cumberland County, does not like people tampering with evidence in the cases that come before him. On September 9 he referred the possible evidence-tampering to the prosecutor, that is, to the NJ Attorney General’s office. As I described in “Will the NJ Attorney General Investigate the NJ Attorney General,” the Plaintiffs doubted that the AG would do a real investigation.

Judge Krell’s referral was directed to Christine Hoffman, Chief of the Corruption Bureau of the Office of the Attorney General. On September 20, 2011, Ms. Hoffman wrote in an official letter, “the Division of Criminal Justice will not pursue criminal charges at this time. This matter is being forwarded to your office for your review and whatever action you deem appropriate.”

And to whom is this letter addressed? To Mr. Robert Giles, Director, Division of Elections. This is like asking the fox to investigate whether proper security measures have been installed at the henhouse. Does this instill confidence in the integrity of elections in New Jersey?

Plaintiffs have asked that Judge Krell assign a special master to investigate all irregularities associated with the June 8, 2011 primary election, including the erasure of the information concerning hardening guidelines. The recent turn of events shows why an independent investigation should take place in Cumberland County.

Open Access to Scholarly Publications at Princeton

At its September 2011 meeting, the Faculty of Princeton University voted unanimously for a policy of open access to scholarly publications:

“The members of the Faculty of Princeton University strive to make their publications openly accessible to the public. To that end, each Faculty member hereby grants to The Trustees of Princeton University a nonexclusive, irrevocable, worldwide license to exercise any and all copyrights in his or her scholarly articles published in any medium, whether now known or later invented, provided the articles are not sold by the University for a profit, and to authorize others to do the same. This grant applies to all scholarly articles that any person authors or co-authors while appointed as a member of the Faculty, except for any such articles authored or co-authored before the adoption of this policy or subject to a conflicting agreement formed before the adoption of this policy. Upon the express direction of a Faculty member, the Provost or the Provost’s designate will waive or suspend application of this license for a particular article authored or co-authored by that Faculty member.

“The University hereby authorizes each member of the faculty to exercise any and all copyrights in his or her scholarly articles that are subject to the terms and conditions of the grant set forth above. This authorization is irrevocable, non-assignable, and may be amended by written agreement in the interest of further protecting and promoting the spirit of open access.”

Basically, this means that when professors publish their academic work in the form of articles in journals or conferences, they should not sign a publication contract that prevents them from also putting a copy of the paper on their own web page or in their university’s public-access repository.

Most publishers in Computer Science (ACM, IEEE, Springer, Cambridge, Usenix, etc.) already have standard contracts that are compatible with open access. Open access doesn’t prevent these publishers from having a pay wall; it simply allows other means of finding the same information. Many publishers in the natural sciences and the social sciences also have policies compatible with open access.

But some publishers in the sciences, in engineering, and in the humanities have more restrictive policies. Action like this by Princeton’s faculty (and by the faculties at more than a dozen other universities in 2009-10) will help push those publishers into the 21st century.

The complete report of the Committee on Open Access is available here.