November 21, 2024

2010 Predictions Scorecard

We’re running a little behind this year, but as we do every year, we’ll review the predictions we made for 2010. Below you’ll find our predictions from 2010 in italics, and the results in ordinary type. Please notify us in the comments if we missed anything.

(1) DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

We win again! There are many examples, but one we specifically predicted: HDCP was cracked. Guess what our first prediction for 2011 will be? Verdict: Right.

(2) Federated DRM systems, such as DECE and KeyChest, will not catch on.

Work on DECE (now renamed UltraViolet) continues to roll forward, with what appears to be broad industry support. It remains to be seen whether UltraViolet devices will actually work well, but the format seems to have at least “caught on” among industry players. We haven’t been following this market too closely, but given that KeyChest seems mostly to be mentioned as an also-ran in UltraViolet stories, its chances don’t look as good. Verdict: Mostly wrong.

(3) Content providers will crack down on online sites that host unlicensed re-streaming of live sports programming. DMCA takedown notices will be followed by a lawsuit claiming actual knowledge of infringing materials and direct financial benefits.

Like their non-live brethren, live streaming sites like Justin.tv have received numerous DMCA takedown notices for copyrighted content. At the time of this prediction, we were unaware of the lawsuit against Ustream by a boxing promotional company, which began in August 2009. Nonetheless, the trend has continued. In the UK, there was an active game of cat-and-mouse between sports teams and illegal live restreaming sources for football (ahem: soccer) and cricket, which make much of their revenue from selling tickets to live matches. In some cases, pubs were temporarily closed when their licenses were suspended in the face of complaints from content providers. In the US, Zuffa, the parent company of the mixed martial arts production company Ultimate Fighting Championship, sued when a patron at a Boston bar connected his laptop to one of the bar’s TVs to stream a UFC fight from an illicit site (Zuffa is claiming $640k in damages). In July, Zuffa subpoenaed the IP addresses of people uploading its content. And last week UFC sued Justin.tv directly for contributory and vicarious infringement, inducement, and other claims (RECAP docket). Verdict: Mostly right.

(4) Major newspaper content will continue to be available online for free (with ads) despite cheerleading for paywalls by Rupert Murdoch and others.

Early last year, the New York Times announced its intention to introduce a paywall in January 2011, and that plan still seems to be on track, but it didn’t actually happen in 2010. The story is the same at the Philly Inquirer, which is considering a paywall but hasn’t put one in place. The Wall Street Journal was behind a paywall already. Other major papers, including the Los Angeles Times, the Washington Post, and USA Today, seem to be paywall-free. The one major paper we could find that did go behind a paywall is the Times of London, which did so in July, with predictably poor results. Verdict: Mostly right.

(5) The Supreme Court will strike down pure business model patents in its Bilski opinion. The Court will establish a new test for patentability, rather than accepting the Federal Circuit’s test. The Court won’t go so far as to ban software patents, but the implications of the ruling for software patents will be unclear and will generate much debate.

The Supreme Court struck down the specific patent at issue in the case, but it declined to invalidate business method patents more generally. It also failed to articulate a clear new test. The decision did generate plenty of debate, but that went without saying. Verdict: Wrong.

(6) Patent reform legislation won’t pass in 2010. Calls for Congress to resolve the post-Bilski uncertainty will contribute to the delay.

Another prediction that works every year. Verdict: Right.

(7) After the upcoming rulings in Quon (Supreme Court), Comprehensive Drug Testing (Ninth Circuit or Supreme Court) and Warshak (Sixth Circuit), 2010 will be remembered as the year the courts finally extended the full protection of the Fourth Amendment to the Internet.

The Supreme Court decided Quon on relatively narrow grounds and deferred on the Fourth Amendment questions on electronic privacy, and the Ninth Circuit in Comprehensive Drug Testing dismissed the lower court's privacy-protective guidelines for electronic searches. However, the big privacy decision of the year was in Warshak, where the Sixth Circuit ruled strongly in favor of the privacy of remotely stored e-mail. Paul Ohm said of the decision: “It may someday be seen as a watershed moment in the extension of our Constitutional rights to the Internet.” Verdict: Mostly right.

(8) Fresh evidence will come to light of the extent of law enforcement access to mobile phone location-data, intensifying the debate about the status of mobile location data under the Fourth Amendment and electronic surveillance statutes. Civil libertarians will call for stronger oversight, but nothing will come of it by year’s end.

Even though we didn’t learn anything significant and new about the extent of government access to mobile location data, the debate around “cell-site” tracking privacy certainly intensified, in Congress, in the courts, and in the public eye. The issue gained significant public attention through a trio of pro-privacy victories in the federal courts, and Congress held a hearing on ECPA reform that focused specifically on location-based services. Despite the efforts of the Digital Due Process Coalition, no bills were introduced in Congress to reform and clarify electronic surveillance statutes. Verdict: Mostly right.

(9) The FTC will continue to threaten to do much more to punish online privacy violations, but it won’t do much to make good on the threats.

As a student of the FTC’s Chief Technologist, I’m not touching this one with a ten-foot pole.

(10) The new Apple tablet will be gorgeous but expensive. It will be a huge hit only if it offers some kind of advance in the basic human interface, such as a really effective full-sized on-screen keyboard.

Gorgeous? Check. Expensive? Check. Huge hit? Check. Advance in the basic human interface? The Reality Distortion Field forces me to say “yes.” Verdict: Mostly right.

(11) The disadvantages of iTunes-style walled garden app stores will become increasingly evident. Apple will consider relaxing its restrictions on iPhone apps, but in the end will offer only rhetoric, not real change.

Apple’s iPhone faced increasingly strong competition from Google’s rival Android platform, and it’s possible this could be attributed to Google’s more liberal policies for allowing apps to run on Android devices. Still, iPhones and iPads continued to sell briskly, and we’re not aware of any major problems arising from Apple’s closed business model. Verdict: Wrong.

(12) Internet Explorer’s usage share will fall below 50 percent for the first time in a decade, spurred by continued growth of Firefox, Chrome, and Safari.

There’s no generally accepted yardstick for browser usage share, because there are so many different ways to measure it. But Wikipedia has helpfully aggregated browser usage share statistics. All five metrics listed there show the usage share falling by between 5 and 10 percentage points over the past year, with current values ranging from 41 to 61 percent. The mean of these statistics is 49.5 percent, and the median is 46.94 percent. Verdict: Right.
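Since the prediction turns on whether the central tendency of these five metrics dipped below 50 percent, here is a quick sketch of the arithmetic. The five share values below are illustrative placeholders chosen only to reproduce the mean and median reported above; they are not the actual figures from Wikipedia’s aggregation:

```python
import statistics

# Hypothetical IE usage-share percentages standing in for the five
# metrics aggregated on Wikipedia (illustrative values only, chosen
# to match the mean and median quoted in the post).
ie_shares = [41.0, 45.0, 46.94, 53.56, 61.0]

mean_share = statistics.mean(ie_shares)      # arithmetic average of all five
median_share = statistics.median(ie_shares)  # middle value when sorted

print(f"mean:   {mean_share:.2f}%")   # ~49.5, below the 50% threshold
print(f"median: {median_share:.2f}%")
```

Either way you summarize the five measurements, the central value lands under 50 percent, which is what the prediction claimed.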

(13) Amazon and other online retailers will be forced to collect state sales tax in all 50 states. This will have little impact on the growth of their business, as they will continue to undercut local bricks-and-mortar stores on prices, but it will remove their incentive to build warehouses in odd places just to avoid having to collect sales tax.

State legislators continue to introduce proposals to tax out-of-state retailers, but Amazon has fought hard against these proposals, and so far the company has largely kept them at bay. Verdict: Wrong.

(14) Mobile carriers will continue locking consumers in to long-term service contracts despite the best efforts of Google and the handset manufacturers to sell unlocked phones.

Google’s experiment selling the Nexus One directly to consumers via the web ended in failure after about four months. T-Mobile, traditionally the nation’s least restrictive national wireless carrier, recently made it harder for consumers to find its no-contract “Even More Plus” plans. It’s still possible to get an unlocked phone if you really want one, but you have to pay a hefty premium, and few consumers are bothering. Verdict: Right.

(15) Palm will die, or be absorbed by Research In Motion or Microsoft.

This prediction was almost right. Palm’s Web OS didn’t catch on, and in April the company was acquired by a large IT firm. However, that technology firm was HP, not RIM or Microsoft. Verdict: Half right.

(16) In July, when all the iPhone 3G early adopters are coming off their two-year lock-in with AT&T, there will be a frenzy of Android and other smartphone devices competing for AT&T’s customers. Apple, no doubt offering yet another version of the iPhone at the time, will be forced to cut its prices, but will hang onto its centralized app store. Android will be the big winner in this battle, in terms of gained market share, but there will be all kinds of fragmentation, with different carriers offering slightly different and incompatible variants on Android.

Almost everything we predicted here happened. The one questionable prediction is the price cut, but we’re going to say that this counts. Verdict: Right.

(17) Hackers will quickly sort out how to install their own Android builds on locked-down Android phones from all the major vendors, leading to threatened or actual lawsuits but no successful legal action taken.

The XDA Developers Forum continues to be the locus for this type of Android hacking, and this year it did not disappoint. The Droid X was rooted and the Droid 2 was rooted, along with many other Android phones. The much-anticipated T-Mobile G2 came with a new lock-down mechanism based in hardware. HTC initially failed to comply with the legal requirement to publish its modifications to the Linux source code that implemented this mechanism, but relented after a Freedom to Tinker post generated some heat. The crack took about a month, and now G2 owners are able to install their own Android builds. Verdict: Right.

(18) Twitter will peak and begin its decline as a human-to-human communication medium.

We’re not sure how to measure this prediction, but Twitter recently raised another $200 million in venture capital and its users exchanged 250 billion tweets in 2010. That doesn’t look like decline to us. Verdict: Wrong.

(19) A politician or a candidate will commit a high-profile “macaca”-like moment via Twitter.

We can’t think of any good examples of high-profile cases that severely affected a politician’s prospects in the 2010 elections, like the “macaca” comment did to George Allen’s 2006 Senate campaign. However, there were a number of lower-profile gaffes, including Sarah Palin’s call for peaceful Muslims to “refudiate” the “Ground Zero Mosque” (the New Oxford American Dictionary named refudiate its word of the year), then-Senator Chris Dodd’s staff mis-tweeting inappropriate comments, and a software glitch that caused the U.S. embassy in Beijing’s account to tweet that the air quality one day was “crazy bad”. Verdict: Mostly wrong.

(20) Facebook customers will become increasingly disenchanted with the company, but won’t leave in large numbers because they’ll have too much information locked up in the site.

In May 2010, Facebook once again changed its privacy policy to make more Facebook user information available to more people. On two occasions, Facebook has faced criticism for leaking user data to advertisers. But the site doesn’t seem to have declined in popularity. Verdict: Right.

(21) The fashionable anti-Internet argument of 2010 will be that the Net has passed its prime, supplanting the (equally bogus) 2009 fad argument that the Internet is bad for literacy.

Wired declared the web dead back in August. Is that the same thing as saying the Net has passed its prime? Bogus arguments all sound the same to us. Verdict: Mostly right.

(22) One year after the release of the Obama Administration’s Open Government Directive, the effort will be seen as a measured success. Agencies will show eagerness to embrace data transparency but will find the mechanics of releasing datasets to be long and difficult. Privacy– how to deal with personal information available in public data– will be one major hurdle.

Many people are calling this open government’s “beta period.” Federal agencies took the landmark step in January by releasing their first “high-value” datasets on Data.gov, but some advocates say these datasets are not “high value” enough. Agencies also published their plans for open government—some were better than others—and implementation of these promises has indeed been incremental. Privacy has been an issue in many cases, but it’s often difficult to know the reasons why an agency decides not to release a dataset. Verdict: Mostly right.

(23) The Open Government agenda will be the bright spot in the Administration’s tech policy, which will otherwise be seen as a business-as-usual continuation of past policies.

As we noted above, the Obama administration has had a pretty good record on open government issues. Probably the most controversial tech policy change has been the FCC’s adoption of new network neutrality rules. These weren’t exactly a continuation of Bush administration policies, but they also didn’t go as far as many activist groups wanted. And we can’t think of any other major tech policy changes. Verdict: Mostly right.

Our score: 7 right, 8 mostly right, 1 half right, 2 mostly wrong, 4 wrong.

Two Stories about the Comcast/Level 3 Dispute (Part 2)

In my last post I told a story about the Level 3/Comcast dispute that portrays Comcast in a favorable light. Now here’s another story that casts Comcast as the villain.

Story 2: Comcast Abuses Its Market Power

As Steve explained, Level 3 is an “Internet Backbone Provider.” Level 3 has traditionally been considered a tier 1 provider, which means that it exchanges traffic with other tier 1 providers without money changing hands, and bills everyone else for connectivity. Comcast, as a non-tier 1 provider, has traditionally paid Level 3 to carry its traffic to places Comcast’s own network doesn’t reach directly.

Steve is right that the backbone market is highly competitive. I think it’s worth unpacking why this is in a bit more detail. Let’s suppose that a Comcast user wants to download a webpage from Yahoo!, and that both are customers of Level 3. So Yahoo! sends its bits to Level 3, which passes them along to Comcast. And traditionally, Level 3 would bill both Yahoo! and Comcast for the service of moving data between them.

It might seem like Level 3 has a lot of leverage in a situation like this, so it’s worth considering what would happen if Level 3 tried to jack up its prices. There are reportedly around a dozen other tier 1 providers that exchange traffic with Level 3 on a settlement-free basis. This means that if Level 3 over-charges Comcast for transit, Comcast can go to one of Level 3’s competitors, such as Global Crossing, and pay it to carry its traffic to Level 3’s network. And since Global Crossing and Level 3 are peers, Level 3 gets nothing for delivering traffic to Global Crossing that’s ultimately bound for Comcast’s network.

A decade ago, when Internet Service Retailers (to use Steve’s terminology) were much smaller than backbone providers, that was the whole story. The retailers didn’t have the resources to build their own global networks, and their small size meant they had relatively little bargaining power against the backbone providers. So the rule was that Internet Service Retailers charged their customers for Internet access, and then passed some of that revenue along to the backbone providers that offered global connectivity. There may have been relatively little competition in the retailer market, but this didn’t have much effect on the overall structure of the Internet because no single retailer had enough market power to go toe-to-toe with the backbone providers.

Two Stories about the Comcast/Level 3 Dispute (Part 1)

Like Steve and a lot of other people in the tech policy world, I’ve been trying to understand the dispute between Level 3 and Comcast. The combination of technical complexity and commercial secrecy has made the controversy almost impenetrable for anyone outside of the companies themselves. And of course, those who are at the center of the action have a strong incentive to mislead the public in ways that make their own side look better.

So building on Steve’s excellent post, I’d like to tell two very different stories about the Level 3/Comcast dispute. One puts Level 3 in a favorable light and the other slants things more in Comcast’s favor.

Story 1: Level 3 Abuses Its Customer Relationships

As Steve explained, a content delivery network (CDN) is a network of caching servers that help content providers deliver content to end users. Traditionally, Netflix has used CDNs like Akamai and Limelight to deliver its content to customers. The dispute began shortly after Level 3 beat out these CDN providers for the Netflix contract.

Google Attacks Highlight the Importance of Surveillance Transparency

Ed posted yesterday about Google’s bombshell announcement that it is considering pulling out of China in the wake of a sophisticated attack on its infrastructure. People more knowledgeable than me about China have weighed in on the announcement’s implications for the future of US-Sino relations and the evolution of the Chinese Internet. Rebecca MacKinnon, a China expert who will be a CITP visiting scholar beginning next month, says that “Google has taken a bold step onto the right side of history.” She has a roundup of Chinese reactions here.

One aspect of Google’s post that hasn’t received a lot of attention is Google’s statement that “only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” A plausible explanation for this is provided by this article (via James Grimmelmann) at PC World:

Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some “account information (such as the date the account was created) and subject line.”

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

Obviously, this report should be taken with a grain of salt since it’s based on a single anonymous source. But it fits a pattern identified by our own Jen Rexford and her co-authors in an excellent 2007 paper: when communications systems are changed to make it easier for US authorities to conduct surveillance, it necessarily increases the vulnerability of those systems to attacks by other parties, including foreign governments.

Rexford and her co-authors point to a 2006 incident in which unknown parties exploited vulnerabilities in Vodafone’s network to tap the phones of dozens of senior Greek government officials. According to news reports, these attacks were made possible because Greek telecommunications carriers had deployed equipment with built-in surveillance capabilities, but had not paid the equipment vendor, Ericsson, to activate this “feature.” This left the equipment in a vulnerable state. The attackers surreptitiously switched on the surveillance capabilities and used them to intercept the communications of senior government officials.

It shouldn’t surprise us that systems built to give law enforcement access to private communications could become vectors for malicious attacks. First, these interfaces are often backwaters in the system design. The success of any consumer product is going to depend on its popularity with customers. Therefore, a vendor or network provider is going to deploy its talented engineers to work on the public-facing parts of the product. It is likely to assign a smaller team of less-talented engineers to work on the law-enforcement interface, which is likely to be both less technically interesting and less crucial to the company’s bottom line.

Second, the security model of a law enforcement interface is likely to be more complex and less well-specified than the user-facing parts of the service. For the mainstream product, the security goal is simple: the customer should be able to access his or her own data and no one else’s. In contrast, determining which law enforcement officials are entitled to which information, and how those officials are to be authenticated, can become quite complex. Greater complexity means a higher likelihood of mistakes.

Finally, the public-facing portions of a consumer product benefit from free security audits from “white hat” security experts like our own Bill Zeller. If a publicly-facing website, cell phone network or other consumer product has a security vulnerability, the company is likely to hear about the problem first from a non-malicious source. This means that at least the most obvious security problems will be noticed and fixed quickly, before the bad guys have a chance to exploit them. In contrast, if an interface is shrouded in secrecy, and only accessible to law enforcement officials, then even obvious security vulnerabilities are likely to go unnoticed and unfixed. Such an interface will be a target-rich environment if a malicious hacker ever does get the opportunity to attack it.

This is an added reason to insist on rigorous public and judicial oversight of our domestic surveillance capabilities in the United States. There has been a recent trend, cemented by the 2008 FISA Amendments, toward law enforcement and intelligence agencies conducting eavesdropping without meaningful judicial (to say nothing of public) scrutiny. Last month, Chris Soghoian uncovered new evidence suggesting that government agencies are collecting much more private information than has been publicly disclosed. Many people, myself included, oppose this expansion of domestic surveillance on civil liberties grounds. But even if you’re unmoved by those arguments, you should still be concerned about these developments on national security grounds.

As long as these eavesdropping systems are shrouded in secrecy, there’s no way for “white hat” security experts to even begin evaluating them for potential security risks. And that, in turn, means that voters and policymakers will be operating in the dark. Programs that risk exposing our communications systems to the bad guys won’t be identified and shut down. Which means the culture of secrecy that increasingly surrounds our government’s domestic spying programs not only undermines the rule of law, it’s a danger to national security as well.

Update: Props to my colleague Julian Sanchez, who made the same observation 24 hours ahead of me.

The Trouble with PACER Fees

One sentiment I’ve seen a number of people express about our release of RECAP is illustrated by this comment here at Freedom to Tinker:

Technically impressive, but also shortsighted. There appears a socialistic cultural trend that seeks to disconnect individual accountability to ones choices. $.08 a page is hardly burdensome or profitable, and clearly goes to offset costs. If additional taxes are required to make up the shortfall RECAP seems likely to create, we all will pay more in general taxes even though only a small few ever access PACER.

Now, I don’t think anyone who’s familiar with my work would accuse me of harboring socialistic sympathies. RECAP has earned the endorsement of my colleague Jim Harper of the libertarian Cato Institute and Christopher Farrell of the conservative watchdog group Judicial Watch. Those guys are not socialists.

Still, there’s a fair question here: under the model we advocate, taxpayers might wind up picking up some of the costs currently being borne by PACER users. Why should taxpayers in general pay for a service that only a tiny fraction of the population will ever use?

I think there are two answers. The narrow answer is that this misunderstands where the costs of PACER come from. There are four distinct steps in the process of publishing a judicial record. First, someone has to create the document. This is done by a judge in some cases and by private litigants in others. Second, someone has to convert the document to electronic format. This is a small and falling cost, because both judges and litigants increasingly produce documents using word processors, so they’re digital from their inception. Third, someone has to redact the documents to ensure private information doesn’t leak out. This is supposed to be done by private parties when they submit documents, but they don’t always do what they’re supposed to, necessitating extra work by court personnel. Finally, the documents need to be uploaded to a website where they can be downloaded by the public.

The key thing to understand here is that the first three steps are things that the courts would be doing anyway if PACER didn’t exist. Court documents were already public records before PACER came onto the scene. Anyone could (and still can) drive down to the courthouse, request any unsealed document they want, and make as many photocopies as they wish. Moreover, even if documents weren’t public, the courts would likely still be transitioning to an electronic system for their internal use.

So this means that the only additional cost of PACER, beyond the activities the courts would be doing anyway, is the web servers, bandwidth, and personnel required to run the PACER web sites themselves. But RECAP users impose no additional load on PACER’s servers. Users download RECAP documents directly from the Internet Archive. So RECAP is entirely consistent with the principle that PACER users should pay for the resources they use.

I think there’s also a deeper answer to this question, which is that it misunderstands the role of the judiciary in a free society. The service the judiciary provides to the public is not the production of individual documents or even the resolution of individual cases. The service it provides is the maintenance of a comprehensive and predictable system of law that is the foundation for our civilization. You benefit from this system whether or not you ever appear in court because it gives you confidence that your rights will be protected in a fair and predictable manner. And in particular, you benefit from judicial transparency because transparency improves accountability. Even if you’re not personally interested in monitoring the judiciary for abuses, you benefit when other people do so.

This is something I take personally because I’ve done a bit of legal reporting myself. I obviously get some direct benefits from doing this—I sometimes get paid and it’s always fun to have people read my work. But I like to think that my writing about the law also benefits society at large by increasing public understanding and scrutiny of the judicial system. And charging for access to the law will be most discouraging to people like me who are trying to do more than just win a particular court case. Journalists, public interest advocates, academics, and the like generally don’t have clients they can bill for the expense of legal research, so PACER fees are a significant barrier.

There’s no conflict between a belief in free markets and a belief that everyone is entitled to information about the legal system that governs their lives. To the contrary, free markets depend on the “rules of the game” being fair and predictable. The kind of judicial transparency that RECAP hopes to foster only furthers that goal.