Archives for 2011

Don't Regulate the Internet. No, Wait. Regulate the Internet.

When Congress considered net neutrality legislation in the form of the Internet Freedom Preservation Act of 2008 (H.R. 5353), representatives of corporate copyright owners weighed in to oppose government regulation of the Internet. They feared that such regulation might inhibit their private efforts to convince ISPs to help them enforce copyrights online through various forms of broadband traffic management (e.g., filtering and bandwidth shaping). “Our view,” the RIAA’s Mitch Bainwol testified at a Congressional hearing, “is that the marketplace is generally a better mechanism than regulation for addressing such complex issues as how to address online piracy, and we believe the marketplace should be given the chance to succeed.” And the marketplace presumably did succeed, at least from the RIAA’s point of view, when ISPs and corporate rights owners entered into a Memorandum of Understanding last summer to implement a standardized, six-strikes graduated response protocol for curbing domestic illegal P2P file sharing. Chalk one up for the market.

What, then, should we make of the RIAA’s full-throated support for the Senate’s pending PROTECT IP Act (S. 968) and its companion bill in the House, SOPA (H.R. 3261)? Both bills would regulate the technical workings of the Internet by requiring operators of DNS servers to block user access to “rogue websites”—defined in PROTECT IP as sites “dedicated to infringing activities”—by preventing the domain names for those sites from resolving to their corresponding IP addresses. In a recent press release on PROTECT IP, the RIAA’s Bainwol praised the draft legislation, asserting the need for—you guessed it—new government regulation of the Internet: “[C]urrent laws have not kept pace with criminal enterprises that set up rogue websites overseas to escape accountability.” So much, I guess, for giving the marketplace the chance to succeed.
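To make the mechanism concrete: DNS blocking of the kind the bills contemplate happens at name resolution, before any connection to the site is made. The sketch below is illustrative only (a real resolver would return an NXDOMAIN response rather than raise an exception, and the blocklist name is hypothetical):

```python
import socket

BLOCKED = {"rogue-site.example"}  # hypothetical names listed under a court order

def resolve(hostname):
    """Sketch of the DNS blocking PROTECT IP/SOPA would require: the
    resolver simply declines to answer for listed names, so the user's
    browser never learns the site's IP address. Illustrative only --
    real resolvers signal failure with NXDOMAIN, not an exception."""
    if hostname in BLOCKED:
        raise LookupError(hostname + ": blocked by resolver policy")
    return socket.gethostbyname(hostname)  # ordinary name-to-IP lookup

try:
    resolve("rogue-site.example")
except LookupError as err:
    print(err)
```

Note that the block operates purely on name lookups; the site's servers and IP addresses are untouched, which is why critics describe this as an intervention in the Internet's technical plumbing rather than an enforcement action against the infringing site itself.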

As the Social Science Research Council’s groundbreaking 2011 report on global piracy concluded, the marketplace could succeed in addressing the problem of piracy beyond U.S. borders if corporate copyright owners were willing to address global disparities in the accessibility of legal digital goods. As the authors explain, “[t]he flood of legal media goods available in high-income countries over the past two decades has been a trickle in most parts of the world.” Looking at the statistics on piracy in the developing world from the consumption side rather than the production side, the SSRC authors assert that what developing markets want and need are “price and service innovations” that have already been rolled out in the developed world. Who is in a better position to deliver such innovations, through the global marketplace, than the owners of copyrights in digital entertainment and information goods? Why not give the marketplace another chance to succeed, particularly when the alternative presented is a radical policy intrusion into the fundamental operation of the Internet?

The RIAA’s political strategy in the war on piracy has been alternately to oppose and support government regulation of the Internet, depending on what’s expedient. I wonder if rights owners and the trade groups that represent them experience any sense of cognitive dissonance when they advocate against something at one moment and for it a little while later—to the same audience, on the same issue.

Is Insurance Regulation the Next Frontier in Open Government Data?

My friend Ray Lehman points to an intriguing opportunity to expand public access to government data: insurance regulation. The United States has a decentralized, state-based system for regulating the insurance industry. Insurance companies are required to disclose data on their premiums, claims, assets, and many other topics to state regulators for each state in which they do business. These data are then shared with the National Association of Insurance Commissioners, a private, non-profit organization that aggregates them and then sells access to the resulting database. Ray tells the story:

The major clients for the NAIC’s insurance data are market analytics firms like Charlottesville, Va.-based SNL Financial and insurance rating agency A.M. Best (Full disclosure: I have been, at different times, an employee at both firms) who repackage the information in a lucrative secondary market populated by banks, broker-dealers, asset managers and private investment funds. While big financial institutions make good use of the data, the rates charged by firms like Best and SNL tend to be well out of the price range of media and academic outlets who might do likewise.

And where a private stockholder interested in reading the financials of a company whose shares he owns can easily look up the company’s SEC filings, a private policyholder interested in, say, the reserves held by the insurer he has entrusted to protect his financial future…has essentially nowhere to turn.

However, Ray points out that the recently enacted Dodd-Frank legislation may change that, as it creates a new Federal Insurance Office. That office will collect data from state regulators and likely has the option to disclose those data to the general public. Indeed, Ray argues, the Freedom of Information Act may even require that the data be disclosed to anyone who asks. The statute is ambiguous enough that, in practice, it is likely to be up to FIO director Michael McRaith to decide what to do with the data.

I agree with Ray that McRaith should make the data public. As several CITP scholars have argued, free bulk access to government data has the potential to create significant value for the public. These data could be of substantial value for journalists covering the insurance industry and academics studying insurance markets. And with some clever hacking, it could likely be made useful for consumers, who would have more information with which to evaluate the insurance companies in their state.

ACM opens another hole in the paywall

Last month I wrote about Princeton University’s new open-access policy. In fact, Princeton’s policy just recognizes where many disciplines and many scholarly publishers were going already. Most of the important publication venues in Computer Science already have an open-access policy; that is, their standard author copyright contract permits an author to make copies of his or her own paper available on the author’s personal web site or institutional repository. These publishers include the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), Springer Verlag (for their LNCS series of conference proceedings), Cambridge University Press, MIT Press, and others.

For example, the ACM’s policy states,

Under the ACM copyright transfer agreement, the original copyright holder retains … the right to post author-prepared versions of the work covered by ACM copyright in a personal collection on their own Home Page and on a publicly accessible server of their employer, and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include this notice both embedded within the full text file and in the accompanying citation display as well:

“© ACM, YYYY. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn”

But now the ACM is trying something new; a mass mailing from ACM’s Director of Publications explains,

ACM has just launched a new referrer-linking service. It is called the ACM Author-Izer Service. In essence, ACM Author-Izer enables you to provide a free access to the definitive versions of your ACM articles permanently maintained by ACM in its Digital Library by embedding the links generated by this service in your personally maintained home-page bibliographies.

With widespread usage of this service, the need to post your author-prepared versions should be alleviated; automatic indexers will point to the article in the DL rather than alternative versions hosted elsewhere without the promise of being permanently maintained.

The ACM has not removed the author’s right to self-post copies of the articles, but clearly the publisher wants to discourage that, and to be the only source for content. Furthermore, authors can use this only if they buy into the ACM’s “Author Profile” page, a feature that ACM has been pushing but that I suspect most authors don’t bother with. It’s an interesting strategy to capture links, or to reduce the number of copies floating around outside the control of the ACM archive. Whether it works may depend, in part, on how difficult it is for authors to use. I suspect most authors won’t bother, but if you want to see some Author-Ized links in action, click here and then click on “A Theory of Indirection via Approximation.” (I can’t link directly from this article, because the ACM permits this service from only one Web address.)
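The ACM hasn’t published the service’s internals, but a referrer-linking scheme of this kind can be sketched as a check on the HTTP Referer header: the free full text is served only when the request arrives from the author’s one registered page. Everything below, including the function and URLs, is a hypothetical illustration, not ACM’s actual implementation:

```python
def authorize_download(registered_page, referer_header):
    """Hypothetical sketch of a referrer-gated link in the style of
    ACM's Author-Izer: serve the free full text only to requests that
    arrive from the author's single registered home page; everyone
    else gets the normal paywalled Digital Library page."""
    if referer_header and referer_header.startswith(registered_page):
        return "full-text"   # followed the link from the author's bibliography
    return "paywall"         # direct or third-party visitors pay as usual

# A link copied onto any other site stops granting free access, which is
# one way a publisher can confine free copies to a single Web address.
print(authorize_download("https://cs.example.edu/~author/",
                         "https://cs.example.edu/~author/pubs.html"))
```

If something like this is indeed the mechanism, it would also explain why the links can’t be demonstrated from an arbitrary page: any referrer other than the registered one fails the check.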

Unlike some newspapers, which are suffering badly in the Internet age, major nonprofit scholarly publishers such as the ACM are in good financial health, with a diverse array of activities and revenue sources: membership dues, conferences, refereed journals, magazines, paid job-advertisement web sites, and so on. Still, there is a lot of experimentation about how to survive as a publisher in the 21st century, and this appears to be the latest experiment.