May 17, 2024

Comcast Gets Slapped, But the FCC Wisely Leaves its Options Open

The FCC’s recent Comcast action—whose full text is not yet available, though it was described in a press release and in statements from each commissioner—is a lesson in the importance of technological literacy for policymaking. The five commissioners’ views, as reflected in their statements, correlate strongly with the degree of understanding of the fact pattern that each commissioner’s statement reveals. Both dissenting commissioners, it turns out, materially misunderstood the technical facts on which they say their decisions were based. But the majority, despite its technical competence, avoided a bright-line rule—and that might itself turn out to be great policy.

Referring to what she introduces as the “BitTorrent-Comcast controversy,” dissenting Commissioner Tate writes that after the FCC began to look into the matter, “the two parties announced on March 27 an agreement to collaborate in managing web traffic and to work together to address network management and content distribution.” Where private parties can agree among themselves, Commissioner Tate sensibly argues, regulators ought to stand back. But as Ed and others have pointed out before, this has never been a two-party dispute. BitTorrent, Inc., which negotiated with Comcast, doesn’t have the power to redefine the open BitTorrent protocol whose name it shares. Anyone can write client software to share files using today’s version of the BitTorrent protocol – and no agreement between Comcast and BitTorrent, Inc. could change that. Indeed, if the protocol were modified to buy overall traffic reductions by slowing downloads for individual users, one might expect many users to decline to switch. For this particular issue to be resolved among the parties, Comcast would have to negotiate with all (or at least most) of the present and future developers of BitTorrent clients. A private or mediated resolution among the primary actors involved in this dispute has not taken place and isn’t, as far as I know, currently being attempted. So while I share Ms. Tate’s wise preference for mediation and regulatory reticence, I don’t think her view in this particular case is available to anyone who fully understands the technical facts.

The other dissenting commissioner, Robert McDowell, shares Ms. Tate’s confusion about who the parties to the dispute are, chastising the majority for going forward after Comcast and BitTorrent, Inc. announced their differences settled. He’s also simply confused about the technology, writing that “the vast majority of consumers” “do not use P2P software to watch YouTube” when (a) YouTube isn’t delivered over P2P software, so its traffic numbers don’t speak to the P2P issue and (b) YouTube is one of the most popular sites on the web, making it very unlikely that the “vast majority of consumers” avoid the site. Likewise, he writes that network management allows companies to provide “online video without distortion, pops, and hisses,” analog problems that aren’t faced by digital media.

The majority decision, in finding Comcast’s activities collectively to be over the line from “reasonable network management,” leaves substantial uncertainty about where that line lies, which is another way of saying that the decision makes it hard for other ISPs to predict what kinds of network management, short of what Comcast did, would prompt sanctions in the future. For example, what if Comcast or another ISP were to use the same tools only to target BitTorrent files that appear, after deep packet inspection, to violate copyright? The commissioners were at pains to emphasize that networks are free to police their networks for illegal content. But a filter designed to impede transfer of most infringing video would be certain to generate a significant number of false positives, and the false positives (that is, transfers of legal video impeded by the filter) would act as a thumb on the scales in favor of traditional cable service, raising the same body of concerns about competition that the commissioners cite as a background factor informing their decision to sanction Comcast. We don’t know how that one would turn out.

McDowell’s dissent highlights the ambiguity of the finding. He writes: “This matter would have had a better chance on appeal if we had put the horse before the cart and conducted a rulemaking, issued rules and then enforced them… The majority’s view of its ability to adjudicate this matter solely pursuant to ancillary authority is legally deficient as well. Under the analysis set forth in the order, the Commission apparently can do anything so long as it frames its actions in terms of promoting the Internet or broadband deployment.”

Should the commissioners have adopted a “bright line” rule, as McDowell’s dissent suggests? The Comcast ruling’s uncertainty guarantees a future of envelope-pushing and resource intensive, case-by-case adjudication, whether in regulatory proceedings or the courts. But I actually think that might be the best available alternative here. It preserves the Commission’s ability to make the right decision in future cases without having to guess, today, what precise rule would dictate those future results. (On the flip side, it also preserves the Commission’s ability to make bad choices in the future, especially if diminished public interest in the issue increases the odds of regulatory capture.) If Jim Harper is correct that Martin’s support is a strategic gambit to tie the issue up while broadband service expands, this suggests that Martin believes, as I do, that uncertainty about future interventions is a good way to keep ISPs on their best behavior.

Could Too Much Transparency Lead to Sunburn?

On Tuesday, the Houston Chronicle published a story about the salaries of local government employees. Headlined “Understaffing costs Houston taxpayers $150 million in overtime,” it was in many respects a typical piece of local “enterprise” journalism, where reporters go out and dig up information that the public might not already be aware is newsworthy. The story highlighted short staffing in the police department, which has too few workers for all the protection it is required to provide the citizens of Houston.

The print story used summaries and cited a few outliers, like a police sergeant who earned $95,000 in overtime. But the reporters had much more data: using Texas’s strong Public Information Act, they obtained electronic payroll data on 81,000 local government employees—essentially the entire workforce. Rather than keep this larger data set to themselves, as they might have done in a pre-Internet era, they posted the whole thing online. The notes to the database say that the Chronicle obtained even more information than it displays, and that before republishing the data, the newspaper “lumped together” what it obliquely describes as “wellness and termination pay” into each employee’s reported base salary.

In a related blog post, Chronicle staffer Matt Stiles writes:

The editors understand this might be controversial. But this information already is available to anyone who wants to see it. We’re only compiling it in a central location, and following a trend at other news organizations publishing databases. We hope readers will find the information interesting, and, even better, perhaps spot some anomalies we’ve missed.

The value proposition here seems plausible: Among the 81,000 payroll records that have just been published, there very probably are news stories of legitimate public interest, waiting to be uncovered. Moreover (given that the Chronicle, like everyone else in the news business, is losing staff) it’s likely that crowdsourcing the analysis of this data will uncover things the reporting staff would have missed.

But it also seems likely that this release of data, by making it overwhelmingly convenient to unearth the salary of any government worker in Houston, will have a raft of side effects—where by “side” I mean that they weren’t intended by the Chronicle. For example, it’s now easy as pie for any nonprofit that raises funds from public employees in Houston to get a sense of the income of their prospects. Comparing other known data, such as approximate home values or other visible spending patterns, with information about salary can allow inferences about other sources of income. In fact, you might argue that this method—researching the home value for every real estate transaction involving a city worker, and combining that data with salary information—would be an extraordinary screening mechanism for possible corruption. Those who buy well above what their salary suggests they can afford must have additional income, and corruption is presumably one major reason why (generally low-paid) government workers are sometimes able to live beyond their apparent means.
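The screening idea described above amounts to a simple join between two public data sets. Here is a minimal sketch in Python; the names, salaries, purchase prices, and the 6x affordability threshold are all invented for illustration:

```python
# Hypothetical sketch: join public payroll records with property
# records and flag employees whose recorded home purchase price far
# exceeds what their salary would ordinarily support.
# All data and the threshold below are invented.

SALARIES = {            # name -> annual salary (payroll data)
    "A. Jones": 42_000,
    "B. Smith": 55_000,
    "C. Davis": 48_000,
}

HOME_PURCHASES = {      # name -> recorded purchase price (property records)
    "A. Jones": 160_000,
    "B. Smith": 900_000,
    "C. Davis": 210_000,
}

def flag_outliers(salaries, purchases, ratio=6.0):
    """Return names whose home price exceeds `ratio` times their salary."""
    flagged = []
    for name, salary in salaries.items():
        price = purchases.get(name)
        if price is not None and price > ratio * salary:
            flagged.append(name)
    return flagged

print(flag_outliers(SALARIES, HOME_PURCHASES))  # ['B. Smith']
```

The point of the sketch is how little work the join requires once both data sets are downloadable: the screening logic is a few lines, and the hard part—obtaining the records—has already been done by the newspaper and the county.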

More generally, it seems like there is a new world of possible synergies opened up by the wide release of this information. We almost certainly haven’t thought of all the consequences that will turn out, in retrospect, to be serious.

Houston isn’t the first place to try this—it turns out that the salaries of faculties at state schools are often quietly available for download as well, for example—but it seems to highlight a real problem. It may be good for the salaries of all public employees to be a click away, but the laws that make this possible generally weren’t passed in the last ten years, and therefore weren’t drafted with the web in mind. The legislative intent reflected in most of our current statutes, when a piece of information is statutorily required to be publicly available, is that citizens should be able to get the information by obtaining, filling out, and mailing a form, or by making a trip to a particular courthouse or library. Those small obstacles made a big difference, as their recent removal reveals: Information that you used to need a good reason to justify the cost of obtaining is now worth retrieving for the merest whim, on the off chance that it might be useful or interesting. And massive projects that require lots of retrieval, which used to be entirely impractical, can now make sense in light of any of a wide and growing range of possible motivations.

Put another way: As technology evolves, the same public information laws create novel and in some cases previously unimaginable levels of transparency. In many cases, particularly those related to the conduct of top public officials, this seems to be a clearly good thing. In others, particularly those related to people who are not public figures, it may be more of a mixed blessing or even an outright problem. I’m reminded of the “candidates” of ancient Rome—the Latin word candidatus literally means “clothed in white robes,” which would-be officeholders wore to symbolize the purity and fitness for office they claimed to possess. By putting themselves up for public office, they invited their fellow citizens to hold them to higher standards. This logic still runs strong today—for example, under the Supreme Court’s Sullivan precedent, public figures face a heightened burden if they try to sue the press for libel after critical coverage.

I worry that some kinds of progress in information technology are depleting a kind of civic ozone layer. The policy solutions here aren’t obvious—one shudders to think of a government office with the power to foreclose new, unforeseen transparencies—but it at least seems like something that legislators and their staffs ought to keep an eye on.

Viacom, YouTube, and the Dangerous Assembly of Facts

On July 2nd, Viacom’s lawsuit against Google’s YouTube unit saw a significant ruling, potentially troubling for user privacy. Viacom asked for, and Judge Louis L. Stanton ordered Google to turn over, the logs of each viewing of all videos in the YouTube database, showing the username and IP address of the user who was viewing the video, a timestamp, and a code identifying the video. The judge found that Viacom “need[s] the data to compare the relative attractiveness of allegedly infringing videos with that of non-infringing videos.” The fraction of views that involve infringing video bears on Viacom’s claim that Google should have vicarious copyright liability–if the infringing videos appear to be an important draw for YouTube users, this implies a financial benefit to Google from the infringement, which would weigh in favor of a claim of vicarious liability.

As Doug Tygar has observed, the judge’s optimistic belief that disclosure of these logs won’t harm privacy seems to be based in part on the conflicting briefs of the parties. Viacom, desiring the logs, told the judge that “the login ID is an anonymous pseudonym that users create for themselves when they sign up with YouTube” which without more data “cannot identify specific individuals.” After quoting this claim and noting that Google did not refute it, the judge goes on to quote a Google employee’s blog post arguing that “in most cases, an IP address without additional information cannot” identify a particular user.

Each of these claims–first, that the login IDs of users are anonymous pseudonyms, and second, that IP addresses alone don’t suffice to identify individuals–is debatable. I haven’t reviewed the briefs that led Judge Stanton to believe each of the assertions, so I can’t say whether his conclusions were reasonable in light of the material presented, or whether the briefs should have led him to a different conclusion. Then again, as the blog post quoted above suggests, Google has at times found itself downplaying the privacy risks associated with certain data. A victory in this argument, leading the judge to take a more expansive view of the possible privacy harms, might have been a mixed blessing for Google in the longer run.

In any case, when he combined the two claims to compel the turnover of the logs, the judge made a significant mistake of his own. Agreeing for the sake of argument that login IDs alone don’t compromise privacy, and that IP addresses alone also don’t compromise privacy, it doesn’t follow that the two combined are equally innocuous. Earlier cases like the AOL debacle have shown us that information that may seem privacy-safe in isolation can be privacy-compromising when it is combined. The fact of combination–the fact that some viewing by a particular login ID happened at a certain IP address, and conversely that a viewing from a particular IP address occurred under the login of a particular user–is itself a potentially important further piece of information. If the judge thought about this fact–if he thought about the further privacy risk involved in the combination of IPs and login IDs–I couldn’t find any evidence of such consideration in his ruling.
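To make the combination risk concrete, here is a toy illustration in Python. Everything in it—the login names, IP addresses, videos, and the IP-to-subscriber mapping—is invented; the point is only that two fields that look innocuous in isolation can re-identify a user when they appear in the same record:

```python
# Illustration with invented data: a pseudonymous login alone reveals
# little, and an IP address alone reveals little, but a log that pairs
# them links an entire viewing history to a household.

# Log records as ordered turned over: (login, ip, timestamp, video_id)
LOGS = [
    ("movie_fan_77", "198.51.100.23", "2008-07-01T10:03", "vid_a"),
    ("movie_fan_77", "198.51.100.23", "2008-07-01T10:41", "vid_b"),
    ("other_user",   "203.0.113.5",   "2008-07-01T11:00", "vid_c"),
]

# Side information: an IP-to-subscriber mapping, such as an ISP, a
# data leak, or a later subpoena might supply. (Invented.)
IP_TO_SUBSCRIBER = {"198.51.100.23": "Jane Doe, 123 Elm St"}

# Joining the two de-pseudonymizes the login -- and with it, every
# viewing recorded under that login, at any IP address.
identified = {
    login: IP_TO_SUBSCRIBER[ip]
    for login, ip, _, _ in LOGS
    if ip in IP_TO_SUBSCRIBER
}
print(identified)  # {'movie_fan_77': 'Jane Doe, 123 Elm St'}
```

Note that once the pseudonym is tied to a subscriber through any single record, the entire history under that login is exposed, including views from IP addresses that could never have been traced on their own.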

Google wants to be permitted to modify the data to reduce the privacy risk before handing it over to Viacom, but it’s not yet clear what agreement, if any, the parties will reach that would do more to protect privacy than Judge Stanton’s ruling requires. It’s also not yet apparent exactly how the judge’s protective order will be constructed. But if the logs are turned over unaltered, as they may yet be, the result could be significant risk: YouTube’s users would then face extreme privacy harm in the event that the data were to leak from Viacom’s possession.

[As always, this post is the opinion of the author (David Robinson) only.]

Newspapers' Problem: Trouble Targeting Ads

Richard Posner has written a characteristically thoughtful blog entry about the uncertain future of newspapers. He renders widespread journalistic concern about the unwieldy character of newspapers into the crisp economic language of “bundling”:

Bundling is efficient if the cost to the consumer of the bundled products that he doesn’t want is less than the cost saving from bundling. A particular newspaper reader might want just the sports section and the classified ads, but if for example delivery costs are high, the price of separate sports and classified-ad “newspapers” might exceed that of a newspaper that contained both those and other sections as well, even though this reader was not interested in the other sections.

With the Internet’s dramatic reductions in distribution costs, the gains from bundling are decreased, and readers are less likely to prefer bundled products. I agree with Posner that this is an important insight about the behavior of readers, but would argue that reader behavior is only a secondary problem for newspapers. The product that newspaper publishers sell—the dominant source of their revenues—is not newspapers, but audiences.

Toward the end of his post, Posner acknowledges that papers have trouble selling ads because it has gotten easier to reach niche audiences. That seems to me to be the real story: Even if newspapers had undiminished audiences today, they’d still be struggling because, on a per capita basis, they are a much clumsier way of reaching readers. There are some populations, such as the elderly and people who are too poor to get online, who may be reachable through newspapers and unreachable through online ads. But the fact that today’s elderly are disproportionately offline is an artifact of the Internet’s novelty (they didn’t grow up with it), not a persistent feature of the marketplace. Posner acknowledges that the preference of today’s young for online sources “will not change as they get older,” but goes on to suggest incongruously that printed papers might plausibly survive as “a retirement service, like Elderhostel.” I’m currently 26, and if I make it to 80, I very strongly doubt I’ll be subscribing to printed papers. More to the point, my increasing age over time doesn’t imply a growing preference for print; if anything, age is anticorrelated with change in one’s daily habits.

As for the claim that poor or disadvantaged communities are more easily reached offline than on, it still faces the objection that television is a much more efficient way of reaching large audiences than newsprint. There’s also the question of how much revenue can realistically be generated by building an audience of people defined by their relatively low level of purchasing power. If newsprint does survive at all, I might expect to see it as a nonprofit service directed at the least advantaged. Then again, if C. K. Prahalad is correct that businesses have neglected a “fortune at the bottom of the pyramid” that can be gathered by aggregating the small purchases of large numbers of poor people, we may yet see papers survive in the developing world. The greater relative importance of cell phones there, as opposed to larger screens, could augur favorably for the survival of newsprint. But phones in the developing world are advancing quickly, and may yet emerge as a better-than-newsprint way of reading the news.

New bill advances open data, but could be better for reuse

Senators Obama, Coburn, McCain, and Carper have introduced the Strengthening Transparency and Accountability in Federal Spending Act of 2008 (S. 3077), which would modify their 2006 transparency act. That first bill created USASpending.gov, a searchable web site of government outlays. USASpending.gov—which was based on software developed by OMB Watch and the Sunlight Foundation—allows end users to search across a variety of criteria. It has begun offering an API, an interface that lets developers query the data and display the results on their own sites. This allows a kind of reuse, but differs significantly from the approach suggested in our recent “Invisible Hand” paper, which urges that all the data be published in open formats. An API delivers search results, which makes the search interface itself very important: having to work through an interface can keep developers from making innovative, unforeseen uses of the data.

The new bill would expand the scope of information available via USASpending.gov, adding information about federal contracts, leases, and audit disputes, among other areas. But it would also elevate the API itself to a matter of statutory mandate. I’m all in favor of mandates that make data available and reusable, but the wording here is already a prime example of why technical standards are often better left to expert regulatory bodies than etched in statute:

“(E) programmatically search and access all data in a serialized machine readable format (such as XML) via a web-services application programming interface”

A technical expert body would (I hope) recognize that there is added value in allowing the data itself to be published so that all of it can be accessed at once. This is significantly different from the site’s current attitude; addressing the list of top contractors by dollar volume, the site’s FAQ says it “does not allow the results of these tables to be downloaded in delimited or XML format because they are not standard search results.” I would argue that standardizers of search results, whoever they may be, should not be able to disallow any data from being downloaded. There doesn’t necessarily need to be a downloadable table of top contractors, but it should be possible for citizens to download all the data so that they can compose such a table themselves if they so desire. The API approach, if it substitutes for making all the data available for download, takes us away from the most vibrant possible ecosystem of data reuse, since whenever government web sites design an interface (whether it’s a regular web interface for end users, or a code-level interface for web developers), they import assumptions about how the data will be used.
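The “compose such a table themselves” point is worth making concrete. Given a bulk download of raw award records, any citizen can compute the very aggregate the site declines to export—or any other aggregate no interface designer anticipated. A minimal sketch in Python, with entirely invented contractor names and dollar figures:

```python
# Sketch of why bulk data beats a fixed interface: with all the raw
# records in hand, a citizen can build a "top contractors by dollar
# volume" table themselves. The records below are invented.
from collections import defaultdict

AWARDS = [  # (contractor, dollars) -- hypothetical outlay records
    ("Acme Corp", 1_200_000),
    ("Globex",      300_000),
    ("Acme Corp",   800_000),
    ("Initech",     500_000),
]

# Sum awards per contractor...
totals = defaultdict(int)
for contractor, dollars in AWARDS:
    totals[contractor] += dollars

# ...then rank by total dollar volume, largest first.
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(top)
# [('Acme Corp', 2000000), ('Initech', 500000), ('Globex', 300000)]
```

A search-results API, by contrast, can only answer the questions its designers thought to support; the aggregation above works only because every underlying record is available at once.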

All that said, making the data available for download is easy, and would be a straightforward additional requirement to add to the bill. And in any case, we owe a debt of gratitude to Senators Coburn, Obama, McCain and Carper for their pioneering, successful efforts in this area.

==

Update, June 12: Amended the list of cosponsors to include Sens. Carper and (notably) McCain. With both major presidential candidates as cosponsors, the bill seems to reflect a political consensus. The original bill back in 2006 had 48 cosponsors and passed unanimously.