
Domain Names Can't Defend Themselves

Today, the Kentucky Supreme Court handed down an opinion in the saga of Kentucky vs. 141 Domain Names (described a while back here on this blog). Here’s the opinion.

This case is fascinating. A quick recap: Kentucky attempted a property seizure of 141 domain names allegedly involved in gambling, on the theory that the domain names themselves constituted “gambling devices” under Kentucky law and were therefore illegal. The state held a forfeiture hearing at which anyone with an interest in the “property” could show up to defend that interest; otherwise, the State would order the registrars to transfer “ownership” of the domain names to Kentucky. No individual claiming to own one of the domain names showed up. Litigation began when two industry associations (iMEGA and IGC) claimed to represent unnamed persons who owned these domain names (and another lawyer showed up during litigation claiming to represent one specific domain name).

The subsequent litigation gets a bit complicated; suffice it to say that the issue that reached the Kentucky Supreme Court was standing: could an association that claimed to represent an owner of a domain name affected by this action properly represent that owner in court without identifying the owner and establishing that he or she actually owned an affected domain name?

The Kentucky Supreme Court said no: there needs to be at least one identified individual owner who will suffer harm before an association can stand in that owner’s stead. The court ruled,

Due to the incapacity of domain names to contest their own seizure and the inability of iMEGA and IGC to litigate on behalf of anonymous registrants, the Court of Appeals is reversed and its writ is vacated.

And on the issue of whether a piece of property can represent itself:

“An Internet domain name does not have an interest in itself any more than a piece of land is interested in its own use.”

Anyway, it would seem that the options for next steps include 1) identifying at least one owner who would suffer harm and then moving back up to the Supreme Court (given that the merits had already been argued at the Court of Appeals), or 2) deciding that the anonymity of domain name ownership in this case is more important than the fight over this very weird seizure of domain names.

As a non-lawyer, I wonder if it’s possible to represent an owner as a John Doe, submitting an affidavit of ownership of an affected domain name.

UPDATE (2010-03-19T00:07:07 EDT): Check the comments for why a John Doe strategy won’t work when the interest in anonymity is to avoid personal liability rather than free expression.

A weird bonus for people who have read this far: if I open the PDF of the opinion on my Mac in Preview.app or Skim.app (two PDF readers), the “SPORTSBOOK.COM” entry in the listing of the parties on the first page is hyperlinked. However, I don’t see this in Adobe Acrobat Pro or Reader. It seems the KY Supreme Court is, likely inadvertently, linking to one of the 141 domain names. Of course, Preview.app and Skim.app might share the same library that causes this one URL to be linked… I’m not enough of a PDF sleuth to figure it out.

Round 2 of the PACER Debate: What to Expect

The past year has seen an explosion of interest in free access to the law. Indeed, something of a movement appears to be coalescing around the issue, due in no small part to the growing Law.gov effort (see the latest list of events). One subset of this effort is our work on PACER, the online document access system for the federal courts. We contend that access to electronic court records should be free (see posts from me, Tim, and Harlan). Our RECAP project helps make some of these documents more accessible, and has gained adoption far above our expectations. That being said, RECAP doesn’t solve the fundamental problem: the federal government needs to publish the full public record for free online. Today, this argument came from an unlikely source, the FCC’s National Broadband Plan.

RECOMMENDATION 15.1: the primary legal documents of the federal government should be free and accessible to the public on digital platforms. […]

– For the Judicial branch, this should apply to all judicial opinions.

[…] Finally, all federal judicial decisions should be accessible for free and made publicly available to the people of the United States. Currently, the Public Access to Court Electronic Records system charges for access to federal appellate, district and bankruptcy court records.[7] As a result, U.S. federal courts pay private contractors approximately $150 million per year for electronic access to judicial documents.[8] [Steve note: The correct figure is $150m over 10 years. However it is quite possible that the federal government as a whole spends $150m or more per year for access to case materials.] While the E-Government Act has mandated that this system change so that this information is as freely available as possible, little progress has been made.[9] Congress should consider providing sufficient funds to publish all federal judicial opinions, orders and decisions online in an easily accessible, machine-readable format.

[7] See Public Access To Court Electronic Records—Overview, http://pacer.psc.uscourts.gov/pacerdesc.html (last visited Jan. 7, 2010).
[8] Carl Malamud, President and CEO, Public.Resource.Org, By the People, Address at the Gov 2.0 Summit, Washington, D.C. 25 (Sept. 10, 2009), available at http://resource.org/people/3waves_cover.pdf
[9] See Letter from Sen. Joseph I. Lieberman to Carl Malamud, President and CEO, Public.Resource.Org (Oct. 13, 2009), available at http://bulk.resource.org/courts.gov/foia/gov.senate.lieberman_20091013_from.pdf

This issue is outside of the Commission’s direct jurisdiction, but the Broadband Plan is intended as a blueprint for the federal government as a whole. In that context, the notion of ensuring that primary legal materials are available for free online fits perfectly with a broader effort to make government digitally accessible. In a similar vein, a bill was introduced today by Rep. Israel. The Public Online Information Act, backed by the Sunlight Foundation, creates a new federal advisory committee to advise all three branches of government on how to make government information available online for free.

To establish an advisory committee to issue nonbinding government-wide guidelines on making public information available on the Internet, to require publicly available Government information held by the executive branch to be made available on the Internet, to express the sense of Congress that publicly available information held by the legislative and judicial branches should be available on the Internet, and for other purposes.

These two developments are the first of what I expect to be many announcements in the coming months, coming from places like the transparency caucus. These announcements will share a theme — there is a growing mandate for universal free access to government information, and judicial information is a key component of that mandate. These requirements will increasingly go to the heart of full free access to the public record, and will reveal the discrepancies between different branches in this regard.

The FCC’s language doesn’t quite get everything right. Most notably, it focuses on opinions even though other components of the record are also key to the public’s understanding of the law. Opinions on PACER are already theoretically free, but the kludgy system for accessing them doesn’t include all of the opinions, isn’t indexable by search engines, and gives only minimal information about the case each opinion is part of. Furthermore, both the docket text required to understand the context and the search functionality required to find the opinions require a fee. Subsequent calls for free access to case materials will have to be more holistic than the opinions-only language of the Broadband Plan.

The POIA language is also a step forward. A federal advisory committee is a good thing in the context of a branch that is more accustomed to the adversarial process than to notice-and-comment. However, we will need much more concrete requirements before we achieve our goals.

In the context of these announcements, the Administrative Office of the Courts made its own announcement today. The Judicial Conference has voted in favor of two measures that make incremental improvements on the current pay-wall model of access to PACER:

  • Adjust the Electronic Public Access fee schedule so that users are not billed unless they accrue charges of more than $10 of PACER usage in a quarterly billing cycle, in effect quadrupling the amount of data available without charge. Currently, users are not billed until their accounts total at least $10 in a one-year period.
  • Approve a pilot in up to 12 courts to publish federal district and bankruptcy court opinions via the Government Printing Office’s Federal Digital System (FDsys) so members of the public can more easily search across opinions and across courts.

These are minor tweaks on a fundamentally limited system. Don’t get me wrong — a world with these changes is better than a world without. It is slightly easier to avoid spending more than $10 in a given quarter than in a given year, but you are nevertheless likely to do so unless you know exactly what you are looking for and retrieve only a few documents. It’s also good to establish a precedent for the GPO publishing case materials, but that doesn’t require a limited trial that could end in a bureaucratic quagmire. The GPO can handle publishing many documents, and any reasonably qualified software engineer could figure out how to deliver them in short order. What’s more, the courts could provide universal free public access today, with zero engineering work: offer a single PACER login that is never billed or, better yet, just stop billing all accounts.

The next round of the PACER debate will be over whether we make a fundamental change in access to federal court records or settle for minor tweaks and call it a day.

Best Practices for Government Datasets: Wrap-Up

[This is the fifth and final post in a series on best practices for government datasets by Harlan Yu and me. (previous posts: 1, 2, 3, 4)]

For our final post in this series, we’ll discuss several issues not touched on by earlier posts, including data signing and the use of certain non-text file formats. The relatively brief discussions of these topics should not be interpreted as an indicator of their importance. The topics simply did not fit cleanly into earlier posts.

One significant omission from earlier posts is data signing with digital signatures. Before discussing this issue, let’s briefly cover what a digital signature is. Suppose that you want to email me an IOU for $100. Later, I may want to prove that the IOU came from you; it’s of little value if you can claim that I made it up. Conversely, you may want the ability to prove that a document I present has been altered; otherwise, I could change the IOU and claim that you owe me $100,000.

Digital signatures help in proving the origin and authenticity of data. These signatures require that you create two related big numbers, known as keys: a private signing key (known only to you) and a public verification key. To generate a digital signature, you plug the data and your signing key into a complicated formula. The formula spits out another big number known as a digital signature. Given the signature and your data, I can use the verification key to prove that the data came unmodified from you. Similarly, nobody can credibly sign modified data without your signing key, so you should be very careful to keep this key a secret.
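
To make the mechanics concrete, here is a minimal sketch in Python using the cryptography library; the choice of library and key type is ours for illustration, not a recommendation of any particular agency setup:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a key pair: the signing key stays secret, while the verification
# (public) key is published for anyone who wants to check signatures.
signing_key = Ed25519PrivateKey.generate()
verification_key = signing_key.public_key()

data = b"IOU $100"
signature = signing_key.sign(data)

# Anyone holding the verification key can confirm that the data is
# unmodified; verify() raises InvalidSignature otherwise.
try:
    verification_key.verify(signature, data)
    print("signature checks out")
except InvalidSignature:
    print("data or signature has been altered")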

Developers may want to ensure the authenticity of government data and to prove that authenticity to users. At first glance, the solution seems to be a simple application of digital signatures: agencies sign their data, and anyone can use the signatures to authenticate an agency’s data. In spite of their initially steep learning curve, tools like GnuPG provide straightforward file signing. In practice, the situation is more complicated. First, an agency must decide what data to sign. Perhaps a dataset contains numerous documents. Developers and other users may want signatures not only for the full dataset but also for individual documents in it.

Once an agency knows what to sign, it must decide who will perform the signing. Ideally, the employee producing the dataset would sign it immediately. Unfortunately, this solution requires all such employees to understand the signature tools and to know the agency’s signing key. Widespread distribution of the signing key increases the risk that it will be accidentally revealed. Therefore, a central party is likely to sign most data. Once data is signed, an agency must have a secure channel for delivering the verification key to consumers of the data; users cannot confirm the authenticity of signed data without this key. While signing a given file with a given key may not be hard, the surrounding issues are trickier. We offer no simple solution here, but further discussion of this topic between government agencies, developers, and the public could be useful for all parties.

Another issue that earlier posts did not address is the use of non-text spreadsheet formats, including Microsoft Excel’s XLS format. These formats can sometimes be useful because they allow the embedding of formulas and other rich information along with the data. Unfortunately, these formats are far more complex than raw text formats, so they present a greater challenge for automated processing tools. A comma-separated value (CSV) file is a straightforward text format that contains values separated by line breaks and commas. It provides an alternative to complicated spreadsheet formats. For example, the medal count from the 2010 Winter Olympics in CSV would be:

  Country,Gold,Silver,Bronze,Total
  USA,9,15,13,37
  Germany,10,13,7,30
  Canada,14,7,5,26
  Norway,9,8,6,23
  ...

Fortunately, the release of data in one format does not preclude its release in another format. Most spreadsheet programs provide an option to save data in CSV form. Agencies should release spreadsheet data in a textual format like CSV by default, but an agency should feel free to also release the data in XLS or other formats.
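
As a rough sketch of what this looks like in practice, the conversion can also be scripted. The example below assumes the pandas library and uses hypothetical file names:

import csv

import pandas as pd

# Convert a spreadsheet to CSV (file names are hypothetical; reading .xls
# files also requires an Excel engine such as xlrd to be installed).
pd.read_excel("medal_count.xls").to_csv("medal_count.csv", index=False)

# The plain-text CSV is then easy to process with standard tools.
with open("medal_count.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["Country"], row["Total"])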

Similarly, agencies will sometimes release large files or groups of files in a compressed or bundled format (for example, ZIP, TAR, GZ, BZ). In these cases, agencies should prominently specify where users can freely obtain software and instructions for extracting the data. Because so many means of compressing and bundling files exist, agencies should not presume that the necessary tools and steps are obvious from the data files themselves.
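
As one sketch of what the necessary tools and steps can amount to, Python’s standard library handles two common cases; the archive names here are placeholders:

import tarfile
import zipfile

# Extract a ZIP archive into a directory.
with zipfile.ZipFile("dataset.zip") as archive:
    archive.extractall("dataset_zip")

# Extract a gzip-compressed TAR archive; "r:gz" selects gzip decompression.
with tarfile.open("dataset.tar.gz", "r:gz") as archive:
    archive.extractall("dataset_tar")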

The rules suggested throughout this series should be seen as best practices rather than hard-and-fast rules. We are still in the process of fleshing out several of these ideas ourselves, and exceptional cases sometimes justify exceptional treatment. In unusual cases, an agency may need to deviate from traditional best practices, but it should carefully consider (and perhaps document) its rationale for doing so. Rules are made to be broken, but they should not be broken for mere expedience.

Our hope is that this series will provide agencies with some points to consider prior to releasing data. Because of Data.gov and the increasing traction of openness and transparency initiatives, we expect to see many more datasets enter the public domain in the coming years. Some agencies will approach the release of bulk data with minimal previous experience. While this poses a challenge, it also presents an opportunity for committed agencies to institute good practices early, before bad habits and poor-quality legacy datasets can accumulate. When releasing new datasets, agencies will make numerous conscious and unconscious choices that impact developers. We hope to help agencies understand developers’ challenges when making these choices.

After gathering input from the community, we plan to create a technical report based on this series of posts. Thanks to numerous readers for insightful feedback; your comments have influenced and clarified our thoughts. If any FTT readers inside or outside of government have additional comments about this post or others, please do pass them along.

Correcting Errors and Making Changes

[This is the fourth post in a series on best practices for government datasets by Harlan Yu and me. (previous posts: 1, 2, 3)]

Even cautiously edited datasets sometimes contain errors, and even meticulously produced schemas require refinement as circumstances change. While errors or changes create inconvenience for developers, most developers appreciate and prepare for their inevitability. Agencies should strive to do the same. A well-developed strategy for fixes and changes can ease the burden they impose on both developers and agencies.

When agencies release data, developers ideally will interact with it in creative new ways. Given datasets containing megabytes to gigabytes of data, novel uses will reveal previously unnoticed errors. Knowledge of these errors benefits the agency as well as other developers using the data, so agencies should take steps to encourage error reporting. Labels in a dataset allow developers to specify errors efficiently and unambiguously. An easy-to-find channel for reporting errors, such as a prominently provided email address or web form, is also critical. Tracking down the contact information of the person responsible for a dataset can be difficult, and a well-known channel reduces this barrier to feedback.

Upon learning of an issue in a dataset, an agency should correct the problem and release the corrected dataset in a timely manner. An important fact to keep in mind when correcting data is that numerous developers may have already downloaded and begun using the old flawed version. For these developers, even a minor modification can cause major issues if not done carefully. Agencies should think about two things: how they will make developers aware that the dataset has been modified and how they will change the dataset itself. The first point is sometimes ignored in spite of its importance. Not only should datasets contain version information, but agencies should also notify developers when the data that they rely on has changed. In particular, agencies should allow developers to subscribe to an email list or an RSS feed for specific datasets that details updates in a well-structured manner. These updates should clearly specify the dataset and version affected, a location where the updated dataset can be found, and a description of the changes to the dataset. When possible, these changes should be specified via a formal, structured description—for example, a diff output—as well as a brief prose explanation.
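
As a minimal sketch of the kind of formal change description we have in mind, a unified diff between two versions of a text-based dataset can be generated with Python’s standard library (the file names are hypothetical):

import difflib

# Hypothetical old and new versions of a dataset file.
with open("notices_v1.xml") as f:
    old_lines = f.readlines()
with open("notices_v2.xml") as f:
    new_lines = f.readlines()

# The unified diff is a structured description of the change that can
# accompany the prose explanation in an update announcement.
diff = difflib.unified_diff(old_lines, new_lines,
                            fromfile="notices_v1.xml",
                            tofile="notices_v2.xml")
print("".join(diff))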

Correction of dataset contents should proceed cautiously. Suppose that an application allows users to comment on parts of a document. If labels in a dataset are not maintained consistently across versions, the developer may need to painstakingly map comments from the old data to the corresponding parts of the new dataset. Issues like this can be mitigated through several practices. First, an agency should seek to preserve labels across versions of a dataset when possible (alternatively, in some cases an agency might wish to change the labels but provide a mapping to assist developers). For example, a dataset might aggregate numerous documents, and a minor change in one document should not necessarily change the labels for the other documents. Recall the side note from our previous post that labels should be separate from ordering information. Corrections to a dataset may add, remove, or reorder items. Detaching order from labels can help agencies ensure label consistency across dataset versions. In addition, the last post and its comments discussed whether agencies should provide a label that is separate from the internally used agency label. This separation allows labels to remain consistent even when Subsection X becomes Section Y under the internal agency labels. Note that these points about consistent labeling can be useful whenever a dataset could have multiple versions: for example, consistent labeling might be beneficial across various versions of a bill.
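
To illustrate why a published label mapping helps, here is a hedged sketch of how a developer might carry user comments over to a new dataset version; the labels and comments are hypothetical, and we assume the agency supplies a mapping only for labels that changed:

# Agency-supplied mapping from old (notice, section, paragraph) labels to
# new labels, covering only the elements whose labels changed.
label_mapping = {
    ("2982", "2", "1"): ("2982", "3", "1"),
}

# User comments keyed by the labels they cite.
comments = {
    ("2982", "2", "1"): ["Please clarify the fee increase."],
    ("2982", "1", "2"): ["Which changes are being considered?"],
}

# Remap comments whose labels changed; labels not in the mapping are kept.
migrated = {label_mapping.get(label, label): texts
            for label, texts in comments.items()}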

Similarly, the structure that agencies use for datasets, the locations where the datasets are hosted, and other details of a dataset sometimes must change. Suppose that an agency releases various statistics each month. When the agency is asked to provide a new statistic, the new data may necessitate changes to the XML schema. Alternatively, the agency may decide to host data at the address “http://www.agency.gov/YEAR/MONTH/data.xml” rather than “http://www.agency.gov/MONTH-YEAR/data.xml,” causing issues for automated tools that periodically check for and download new data. To reduce the adverse impact of these changes on developers, agencies should provide detailed notice of the changes as early as possible. Early notice gives developers time to modify their tools. These notifications can occur via an email list or RSS feed providing details of the changes in a clear, consistent format.

The possibility of changes and their impact on developers should be taken into account at all stages of the data production process. Suppose an agency adds an element to a schema that specifies a unique individual, but the schema may someday need to specify a corporation instead. Although the agency should not speculatively add unnecessary elements to the schema, it should be mindful of possible changes when designing the rest of the schema. Various design choices may minimize the impact of a change if one becomes necessary later. Agencies should also avoid the urge to alter a schema dramatically each time it requires a minor change. A major overhaul—even when done to clean up the schema—may require equally dramatic changes in tools utilizing the data. To ensure that developers notice changes to XML schemas, both schema files and datasets should contain a prominent schema version number. If an agency changes the location where data is hosted, it should consider temporarily using aliases so that requests for the old addresses automatically resolve to the correct data. Once the old addresses are phased out, agencies should use a standard HTTP 404 status code to indicate that the requested data was not found at the specified location. Simply supplying a “Not Found” page without this standard code could make life harder for developers, whose automated tools must instead parse the page.
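
A short sketch shows why the status code matters to automated tools. This assumes the requests library and a hypothetical address scheme like the one above:

import requests

# Hypothetical monthly dataset address.
url = "http://www.agency.gov/2010/03/data.xml"

response = requests.get(url)
if response.status_code == 200:
    with open("data.xml", "wb") as f:
        f.write(response.content)
elif response.status_code == 404:
    # A genuine 404 lets the tool conclude that no data is published here,
    # without guessing from the wording of an error page.
    print("no data found at", url)
else:
    print("unexpected response:", response.status_code)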

When making changes, agencies should consider soliciting input directly from developers. Because the preferences of developers might not be obvious, this input can lead to choices that help developers without increasing the burden on agencies. In fact, developers may even come up with ideas that make life easier for an agency.

Our next and final post in this series will discuss a handful of additional issues for agencies to consider.

Labeling Dataset Contents

[This is the third post in a series on best practices for government datasets by Harlan Yu and me. (previous posts)]

When the government releases a dataset, citizens ideally will discuss the contents and supply educated feedback. The ability to reference facts and figures in a dataset supports a constructive dialog. Vague concerns are harder to articulate and address than ones citing specific paragraphs in a document. In this post, we’ll discuss why data labeling supports this goal, and when and how government agencies should uniquely label data inside a dataset for citability. As in the previous post, our focus will be on XML, though the lessons apply to other formats.

As our interactions with each other and with our government increasingly occur online, the need for precise communication has also increased. Open-government initiatives can give knowledge and voices to more citizens than ever before, but this can lead to an almost overwhelming quantity of discussion. Various technologies can help us to manage and make sense of this information, but these technologies are most effective with unambiguous data. For example, tools could sort citizens’ comments on a bill by section, but this task can be difficult unless the comments cite sections. One way to encourage citations is by placing tags in the dataset that citizens and open-government tools can easily reference.

The structure of XML implicitly enables referencing of elements in a sense. A citizen could cite the seventh “<PARAGRAPH>” element in the twenty-eighth “<DOCUMENT>” element in a dataset. Even ignoring how error-prone counting is for humans, reliance on this structure is not ideal. XML schemas can specify order for elements of different types but not the same type—a parser could validly retrieve <PARAGRAPH> elements of a document in any order (we’ll discuss in our next post why labels and ordering should be treated as two separate problems; our point here is only that element order should not be used as an implicit label). In addition, different parties may come up with different reference schemes in the absence of an explicit authoritative one. The agency creating a dataset might refer to the paragraph referenced above as Section XII of Document K6-2495, and another developer might refer to it as “<PARAGRAPH>” 147. An abundance of reference schemes can make it harder for government officials to understand citizens, harder for citizens to understand each other, and harder for developers to merge the function and output of their tools. Using an explicit common reference scheme avoids these issues.

Of course, different uses require different forms of labeling, and agencies cannot meet the desires of everyone. How can they decide where to add labels? Recall that our previous posts address the question of who should add what structure to a dataset. Agencies should use the answer as a guide for where to add labels, generally adding labels to all elements they create. If an agency breaks text up by paragraph, each paragraph should be citable; if it breaks text up by sentence, each sentence should be citable. Labels are fairly straightforward to add to elements in XML, so this rule imposes minimal additional work on agencies. Additional partitioning and labeling of data can be left to private parties. Some precedent already exists for private-party involvement here: Citability.org is working to enable citation of government documents at a paragraph level.

When agencies add labels, they should strive to use the same reference schemes used internally. Unfortunately, labeling schemes utilizing Roman numerals, letters, or almost anything other than Arabic numerals (0, 1, 2, etc.) can be hard to process. For these cases, the agency should include two labels: an internal agency label and a numeric label. While this suggestion runs counter to our rule against redundancy, it makes the labels far easier to process and facilitates easy translation between the two schemes.

In general, however, the lessons from past posts should be kept in mind when labeling, including the points about avoiding redundancy: the label for Part 2 of a document should appear in element names and attributes (e.g., “<PART LABEL="2">[…]</PART>”) rather than text. Labels should uniquely identify an element among those with the same parent, but a label may not be necessary if an element’s type is unique among its siblings.

To make these recommendations more concrete, we end with an example. Consider the following document:

  Notice 2982:  Proposal to Increase Public Transit Fees

  Section I.  Budget Shortfall
  In fiscal year 2009, [...]
  Unless changes are made [...]

  Section II.  Decreasing the Deficit
  To compensate for [...]
  This relatively modest [...]

This document could be represented in a dataset as:

<DATASET>
  [...]
  <NOTICE LABEL="2982">
    <TITLE>Proposal to Increase Public Transit Fees</TITLE>
    <SECTION AGENCY_LABEL="I" LABEL="1">
      <TITLE>Budget Shortfall</TITLE>
      <PARAGRAPH LABEL="1">In fiscal year 2009, [...]</PARAGRAPH>
      <PARAGRAPH LABEL="2">Unless changes are made [...]</PARAGRAPH>
    </SECTION>
    <SECTION AGENCY_LABEL="II" LABEL="2">
      <TITLE>Decreasing the Deficit</TITLE>
      <PARAGRAPH LABEL="1">To compensate for [...]</PARAGRAPH>
      <PARAGRAPH LABEL="2">This relatively modest [...]</PARAGRAPH>
    </SECTION>
  </NOTICE>
  [...]
</DATASET>

Among other things, we can uniquely reference the notice (Notice 2982) and each paragraph (e.g., Notice 2982, Section II, paragraph 1).
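
As a brief sketch of how a developer might use these labels, Python’s standard ElementTree module can retrieve a paragraph by its explicit labels rather than by counting elements; this assumes the example dataset above has been saved as dataset.xml:

import xml.etree.ElementTree as ET

root = ET.parse("dataset.xml").getroot()

# Notice 2982, Section II (numeric label "2"), paragraph 1.
paragraph = root.find(
    "NOTICE[@LABEL='2982']/SECTION[@LABEL='2']/PARAGRAPH[@LABEL='1']")
print(paragraph.text)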

In our next post, we’ll discuss how agencies can handle errors and make other changes while reducing the strain on developers.