Computing in the Cloud, January 14-15 in Princeton

The agenda for our workshop on the social and policy implications of “Computing in the Cloud” is now available, along with information about how to register (for free). We have a great lineup of speakers, with panels on “Possession and ownership of data”, “Security and risk in the cloud”, “Civics in the cloud”, and “What’s next”. The workshop is organized by the Center for InfoTech Policy at Princeton, and sponsored by Microsoft.

Don’t miss it!

Lessons from Facebook's Beacon Misstep

Facebook recently beat a humiliating retreat from Beacon, its new system for peer-based advertising, in the face of users’ outrage about the system’s privacy implications. (When you bought or browsed products on certain third-party sites, Beacon would show your Facebook friends what you had done.)

Beacon was a clever use of technology and might have brought Facebook significant ad revenue, but it seemed a pretty obvious nonstarter from users’ point of view. Trying to deploy it, especially without a strong opt-out capability, was a mistake. On the theory that mistakes are often instructive, let’s take a few minutes to work through possible lessons from the Beacon incident.

To start, note that this wasn’t a privacy accident, where user data is leaked because of a bug, procedural breakdown, or treacherous employee. Facebook knew exactly what it was doing, and thought it was making a good business decision. Facebook obviously didn’t foresee their users’ response to Beacon. Though the money – not to mention the chance to demonstrate business model innovation – must have been a powerful enticement, the decision to proceed with Beacon could only have made sense if the company thought a strong user backlash was unlikely.

Organizations often have trouble predicting what will cause privacy outrage. The classic example is the U.S. government’s now-infamous Total Information Awareness program. TIA’s advocates in the government were honestly surprised when the program’s revelation caused a public furor. This wasn’t just public posturing. I still remember a private conversation I had with a TIA official who ridiculed my suggestion that the program might turn out to be controversial. This blindness contributed to the program’s counterproductive branding, such as the creepy all-seeing-eye logo. Facebook’s error was similar, though of much smaller magnitude.

Of course, privacy is not the only area where organizations misjudge their clients’ preferences. But there does seem to be something about privacy that makes these sorts of errors more common.

What makes privacy different? I’m not entirely certain, but since I owe you at least a strawman answer, let me suggest some possibilities.

(1) Overlawyerization: Organizations see privacy as a legal compliance problem. They’re happy as long as what they’re doing doesn’t break the law; so they do something that is lawful but foolish.

(2) Institutional structure: Privacy is spun off to a special office or officer so the rest of the organization doesn’t have to worry about it; and the privacy office doesn’t have the power to head off mistakes.

(3) Treating privacy as only a PR problem: Rather than asking whether its practices are really acceptable to clients, the organization does what it wants and then tries to sell its actions to clients. The strategy works, until angry clients seize control of the conversation.

(4) Undervaluing emotional factors: The organization sees a potential privacy backlash as “only” an emotional response, which must take a backseat to more important business factors. But clients might be angry for a reason; and in any case they will act on their anger.

(5) Irrational desire for control: Decisionmakers like to feel that they’re in control of client interactions. Sometimes they insist on control even when it would be rational to follow the client’s lead. Where privacy is concerned, they want to decide what clients should want, rather than listening to what clients actually do want.

Perhaps the underlying cause is the complex and subtle nature of privacy. We agree that privacy matters, but we don’t all agree on its contours. It’s hard to offer precise rules for recognizing a privacy problem, but we know one when we see it. Or at least we know it after we’ve seen it.

Workshop: Computing in the Cloud

I’m excited to announce that Princeton’s Center for InfoTech Policy is putting on a workshop on the policy and social implications of “Computing in the Cloud” – the trend where companies, rather than users, store and manage an increasing range of personal data.

Examples include Hotmail and Gmail replacing desktop email, YouTube taking over as a personal video platform, and Flickr competing with desktop photo storage solutions. Facebook, Myspace and other social networks have pioneered new kinds of tools that couldn’t exist on the desktop, and more new models are sure to emerge.

I’m confident that this trend will reshape tech policy, and will change how people relate to technology. But I don’t yet know what those changes will be. By drawing together experts from computer science, industry, government and law, I hope the Center can help those of us at Princeton, and workshop participants from around the country, get a better sense of where things might be headed.

The workshop will be held on the Princeton campus on January 14 and 15, 2008. It will be free and open to the public. We will have a series of panel discussions, interspersed with opportunities for informal exchanges of ideas. We’re still putting together the list of panels and panelists, so we haven’t yet published a schedule. If you’re interested in attending or want to get email updates about the workshop, please email David Robinson (dgr at princeton dot edu).

Here are some of the possible themes for panels we are exploring:

  • Possession and ownership of data: In cloud computing, a provider’s data center holds information that would more traditionally have been stored on the end user’s computer. How does this impact user privacy? To what extent do users “own” this data, and what obligations do the service providers have? What obligations should they have? Does moving the data to the provider’s data center improve security or endanger it?
  • Collaboration and globalization: Cloud computing systems offer new sharing and collaboration features beyond what was possible before. They make shared creation among far-flung users easier, allow or require data to be stored in many different jurisdictions, and give users access to offerings that may be illegal in the users’ home countries. How will local laws, when applied to data centers whose user base is global, affect users’ practices? Do these services drive forward economic growth — and if so, what effect should that fact have on the policy debate?
  • New roles for new intermediaries: Cloud services often involve new intermediaries such as Facebook, MySpace, eBay, and Second Life, standing between people who might have interacted more directly before these services emerged. To what extent are these services “communities”, as their providers claim? How much control do users feel they have over these communities? How much control do and should they actually have? How does the centralized nature of these intermediaries affect the efficiency and diversity of online experiences? Can the market protect consumers and competition, or is government oversight needed?
  • What’s next: What new services might develop, and how will today’s services evolve? How well will cloud computing be likely to serve users, companies, investors, government, and the public over the longer run? Which social and policy problems will get worse due to cloud computing, and which will get better?

Why the 09ers Are So Upset

The user revolt at Digg and elsewhere, over attempts to take down the now-famous “09 F9 …” number, is now all over the press. (Background: 1, 2) Many non-techies, including some reporters, wonder why users care so much about this. What is it about “09F9…” that makes people willing to defend it by making T-shirts, writing songs, or subjecting their dotcom startup to lawsuit risk?

The answer has several parts. The first answer is that it’s a reaction against censorship. Net users hate censorship and often respond by replicating the threatened content. When Web companies take down user-submitted content at the behest of big media companies, that looks like censorship. But censorship by itself is not the whole story.

The second part of the answer, and the one most often missed by non-techies, is the fact that the content in question is an integer – an ordinary number, in other words. The number is often written in geeky alphanumeric format, but it can be written equivalently in a more user-friendly form like 790,815,794,162,126,871,771,506,399,625. Giving a private party ownership of a number seems deeply wrong to people versed in mathematics and computer science. Letting a private group pick out many millions of numbers (like the AACS secret keys), and then simply declare ownership of them, seems even worse.
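The equivalence between the "geeky alphanumeric format" and an ordinary decimal number is easy to demonstrate. The sketch below uses a made-up 16-byte key, chosen arbitrarily for illustration (deliberately not the real 09F9 number): base-16 and base-10 are just two spellings of the same integer.

```python
# A hypothetical 16-byte key, chosen arbitrarily for this illustration;
# this is deliberately NOT the real 09F9 number.
key_hex = "0f1e2d3c4b5a69788796a5b4c3d2e1f0"

# The hex string and the decimal integer are the same mathematical object.
n = int(key_hex, 16)
print(n)  # the key written as an ordinary (very large) decimal number

# Converting back recovers the original hex spelling exactly.
assert format(n, "032x") == key_hex
```

Nothing about either representation is privileged; claiming ownership of the hex string is claiming ownership of the integer itself.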

While it’s obvious why the creator of a movie or a song might deserve some special claim over the use of their creation, it’s hard to see why anyone should be able to pick a number at random and unilaterally declare ownership of it. There is nothing creative about this number – indeed, it was chosen by a method designed to ensure that the resulting number was in no way special. It’s just a number they picked out of a hat. And now they own it?
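That "method designed to ensure that the resulting number was in no way special" is, in essence, uniform random selection. A minimal sketch of how a key like this is typically drawn (using Python's standard cryptographic randomness, not the actual AACS key-generation procedure):

```python
import secrets

# Draw 16 bytes uniformly at random: by construction, no particular
# value is more "special" than any other.
key = secrets.token_bytes(16)        # 128 random bits
print(key.hex())                     # the geeky alphanumeric spelling
print(int.from_bytes(key, "big"))    # the very same number in decimal
```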

As if that’s not weird enough, there are actually millions of other numbers (other keys used in AACS) that AACS LA claims to own, and we don’t know what they are. When I wrote the thirty-digit number that appears above, I carefully avoided writing the real 09F9 number, so as to avoid the possibility of mind-bending lawsuits over integer ownership. But there is still a nonzero probability that AACS LA thinks it owns the number I wrote.
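How nonzero is that probability? A back-of-the-envelope estimate, assuming (purely for illustration) that AACS LA holds ten million secret keys out of the 2^128 possible 16-byte values:

```python
# Assumed figure for illustration only; the true number of AACS keys
# is not public.
keys_claimed = 10_000_000
keyspace = 2 ** 128

# Chance that one arbitrarily written 128-bit number collides with a key
# AACS LA claims to own.
p = keys_claimed / keyspace
print(p)  # on the order of 1e-32: astronomically small, but not zero
```

Vanishingly unlikely, but the conceptual point stands: under a number-ownership theory, you cannot even be sure which integers you are allowed to write down.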

When the great mathematician Leopold Kronecker wrote his famous dictum, “God created the integers; all else is the work of man”, he meant that the basic structure of mathematics is part of the design of the universe. What God created, AACS LA now wants to take away.

The third part of the answer is that the link between the 09F9 number and the potential harm of copyright infringement is pretty tenuous. AACS LA tells everyone who will listen that the discovery and distribution of the 09F9 number is no real threat to the viability of AACS or the HD-DVD/Blu-ray formats. A person getting the 09F9 number could, if he or she is technically skillful, invest a lot of work to get access to movies. But there are easier, less tech-intensive ways to get the same movies. Publishing the number has approximately zero impact on copyright infringement.

Which brings us to the civil disobedience angle. It’s no secret that many in the tech community despise the DMCA’s anticircumvention provisions. If you’re going to defy a law to show your disagreement with it, you’ll look for a situation where (1) the application of the law is especially inappropriate, (2) your violation does no actual harm, and (3) many others are doing the same thing so the breadth of opposition to the law is evident. That’s what we see here.

It will be interesting to see what AACS LA does next. My guess is that they’ll cut their losses, refrain from sending demand letters and filing lawsuits, and let the 09F9 meme run its course.

Digg Users Revolt Over AACS Key

I wrote yesterday about efforts by AACS LA, the entity that controls the AACS copy protection system used in HD-DVD and Blu-ray discs, to stop people from republishing a sixteen-byte cryptographic key that can unlock most existing discs. Much of the action took place at Digg, a site that aggregates Web page recommendations from many people. (At Digg, you can recommend pages on the Web that you find interesting, and Digg will show you the most-recommended pages in various categories.)
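The aggregation mechanism just described can be sketched in a few lines. This is a toy illustration of recommendation-counting, not Digg's actual ranking algorithm, and the users and URLs are invented:

```python
from collections import Counter

# Each entry is one user's recommendation ("digg") of a URL.
diggs = [
    ("alice", "http://example.com/story-a"),
    ("bob",   "http://example.com/story-a"),
    ("carol", "http://example.com/story-b"),
    ("dave",  "http://example.com/story-a"),
]

# The "front page" is simply the most-recommended pages.
counts = Counter(url for _user, url in diggs)
front_page = [url for url, _n in counts.most_common(2)]
print(front_page)  # story-a (3 diggs) ranks ahead of story-b (1 digg)
```

The key property for what follows is that the ranking is driven entirely by user submissions, which is exactly what made the revolt possible.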

Digg had received a demand letter from AACS LA, asking Digg to take down links to sites containing the key. After consulting with lawyers, Digg complied, and Digg’s administrators started canceling entries on the site.

Then Digg’s users revolted. As word got around about what Digg was doing, users launched a deluge of submissions to Digg, all mentioning or linking to the key. Digg’s administrators tried to keep up, but submissions showed up faster than the administrators could cancel them. For a while yesterday, the entire front page of Digg – the “hottest” pages according to Digg’s algorithms – consisted of links to the AACS key.

Last night, Digg capitulated to its users. Digg promised to stop removing links to the key, and Digg founder Kevin Rose even posted the key to the site himself. Rose wrote on Digg’s official blog,

In building and shaping the site I’ve always tried to stay as hands on as possible. We’ve always given site moderation (digging/burying) power to the community. Occasionally we step in to remove stories that violate our terms of use (eg. linking to pornography, illegal downloads, racial hate sites, etc.). So today was a difficult day for us. We had to decide whether to remove stories containing a single code based on a cease and desist declaration. We had to make a call, and in our desire to avoid a scenario where Digg would be interrupted or shut down, we decided to comply and remove the stories with the code.

But now, after seeing hundreds of stories and reading thousands of comments, you’ve made it clear. You’d rather see Digg go down fighting than bow down to a bigger company. We hear you, and effective immediately we won’t delete stories or comments containing the code and will deal with whatever the consequences might be.

If we lose, then what the hell, at least we died trying.

This is a remarkable event. Critics of Web 2.0 technologies like Digg often say that users are being exploited, that the “communities” on these sites are shams and the company running the site is really in control. Here, the Digg company found that it doesn’t entirely control the Digg site – if users want something on the site badly enough, they can put it there. If Digg wasn’t going to shut down entirely (or become clogged with postings of the key), it had no choice but to acquiesce and allow links to the key. But Digg went beyond acquiescence, siding with its users against AACS LA, by posting the key itself and practically inviting a lawsuit from AACS LA.

Digg’s motive here probably has more to do with profit and market share than with truth, justice, and the American way. It’s not a coincidence that Digg’s newly discovered values coincide with the desires of its users. Still, the important fact is that users could bend Digg to their will. It turns out that the “government” of Digg’s community gets its power from the consent of the governed. Users of other Web 2.0 sites will surely take note.