The Idaho Statesman, my sort-of-local newspaper, just announced that it will follow the lead of the Miami Herald and no longer allow readers to post anonymous comments to online stories. Starting September 15, readers who want to make comments will have to log in through Facebook. This is the second time I’ve encountered a mandatory Facebook login for users trying to gain access to a third-party service. The first time was when I tried to sign up last year for the music streaming service Spotify. (Spotify now allows users to create an account using an email address, but it didn’t always.)

I’m not a Facebook fan for reasons related to Facebook’s privacy and information practices, but that’s really neither here nor there. The question is whether I should have to be a Facebook user to access services on the Internet that have no natural or necessary connection to Facebook. I’m not talking here about giving users the option to log in through Facebook if they want to share their online activities with Facebook friends. I’m talking about conditioning access to a non-Facebook service, or to some aspect of that service, on a user’s having a Facebook account.

Internet users are accustomed to dealing with lots of intermediaries, from broadband providers to search engines, to get access to services and information. The Internet is all about mediated transfers of information. I get that. But this strikes me as a troubling new layer of intermediation.
Since the historic snowstorm “Nemo” deposited a NOAA-certified 40 inches of snow on my hometown of Hamden, CT, I have been watching from afar to see how the town and its citizens are using a combination of digital technology, the traditional telecommunications network, and mass media to communicate in the aftermath of the storm. While I have been lucky enough not to have been directly affected by the storm, my senior-citizen parents have been inside their house waiting for a snow plow to come for approximately five days. Since they are healthy and have food and heat, I have the luxury of writing about Hamden’s government’s use of communications technology during this weather emergency. The purpose of this post is not to pile onto an already overwhelmed town government, but to highlight fairly easily achievable improvements that Hamden’s government could make in its emergency communications, so that residents of the town will be safer the next time an emergency occurs.
On Friday morning, I woke up and heard the Mayor of Hamden, Scott Jackson, on CNN saying of the storm, “It’s a Disaster.” I was impressed that the Mayor of my hometown of approximately 60,000 people had a national and international forum to talk about the weather emergency and recovery efforts. I figured this was only the first step in the process of informing town residents about what they could expect over the next few days. However, based on reviewing “The Town of Hamden, Connecticut” Facebook page, e-mails sent from the Mayor’s Office, and the Mayor’s Twitter feed, and on conversations with my parents, there are three specific areas where the town could have communicated more effectively during this weather emergency. These failures of communication sell short the heroic work of the people working around the clock to plow the streets and respond to emergencies.
There’s a meme going around on Facebook, saying that you should post a certain legal incantation on your Facebook wall, to reclaim certain rights that Facebook would otherwise be taking from you. There’s an interesting counter-meme in the press now, saying that all of this is pointless and of course you can’t change your rights just by posting a statement on a website. Both memes have something to teach us about perceptions of rights and responsibilities online.
This semester, Ed Felten and I are teaching a Freshman Seminar called “Facebook: The Social Impact of Social Networks.” This week, the class is discussing a recent article published in the journal Nature, entitled “A 61-Million-Person Experiment in Social Influence and Political Mobilization”. The study reveals that if Facebook shows you a list of your closest friends who have voted, you are more likely to do so yourself. It is a fascinating read both because it is probably the first very-large-scale controlled test of social influence via online social networks, and because it appears that without much work the company was able to spur about 340,000 extra people to vote in the 2010 midterm elections.
I confess that last night I watched some of the wildly popular reality TV competition The Voice. What can I say? The pyrotechnics were more calming than the amped-up CNN spin-zoners. It was the first day that the at-home audience began voting for their favorites. Carson Daly mentioned that the show would take the requisite break on Election Night, but return in force on Wednesday. (Incidentally, I can’t decide whether or not this video urging us to “vote Team Cee-Lo” is too clever by half).
The Global Network Initiative (GNI) was founded in October 2008 to help technology firms navigate the political implications of their success. Engineers at the world’s leading technology firms have been incredibly innovative, but they do not always understand the global dynamics of their innovation. Moreover, they do not always acknowledge the ways in which politicians get involved with the design process. The creation of the GNI signaled that some in the technology sector were ready to start having more open conversations. Facebook recently joined the GNI–now four years old–as an “observer”. But the company’s founder Mark Zuckerberg also traveled to Moscow to meet with that country’s tech-savvy second-in-command, Dmitri Medvedev. With the anniversary of the GNI in mind, let’s consider the different ways of interpreting the Zuckerberg-Medvedev summit.
I’m a fellow at the Center for Information Technology Policy at Princeton this year. My first months here have already been amazing. I’m pleased to be joining this blog as well!
My conceptual toolkit and my methods come mostly from sociology, but I’m also a former computer programmer. That means that I feel welcome in a place where policy people and computer scientists collaborate. My interests revolve around how technology and society interact, and I’ve been enjoying having these conversations with many new people. I research a variety of topics concerning the social impacts of technology — things like social interaction, collective action, and privacy & publicity. I’m also enjoying teaching a course this Fall at the Woodrow Wilson School called “New Media and Social Movements: New Tools for an Old Game” (syllabus here – PDF).
Now that the FCC has finally acted to safeguard network neutrality, the time has come to take the next step toward creating a level playing field on the rest of the Information Superhighway. Network neutrality rules are designed to ensure that large telecommunications companies do not squelch free speech and online innovation. However, it is increasingly evident that broadband companies are not the only threat to the open Internet. In short, federal regulators need to act now to safeguard social network neutrality.
The time to examine this issue could not be better. Facebook is the dominant social network in countries other than Brazil, where everybody uses Friendster or something. Facebook has achieved near-monopoly status in the social networking market. It now dominates the web, permeating all aspects of the information landscape. More than 2.5 million websites have integrated with Facebook. Indeed, there is evidence that people are turning to social networks instead of faceless search engines for many types of queries.
Social networks will soon be the primary gatekeepers standing between average Internet users and the web’s promise of information utopia. But can we trust them with this new-found power? Friends are unlikely to be an unbiased or complete source of information on most topics, creating silos of ignorance among the disparate components of the social graph. Meanwhile, social networks will have the power to make or break Internet businesses built atop the enormous quantity of referral traffic they will be able to generate. What will become of these businesses when friendships and tastes change? For example, there is recent evidence that social networks are hastening the decline of the music industry by promoting unknown artists who provide their music and streaming videos for free.
Social network usage patterns reflect deep divisions of race and class. Unregulated social networks could rapidly become virtual gated communities, with users cut off from others who could provide them with a diversity of perspectives. Right now, there’s no regulation of the immense decision-influencing power that friends have, and there are no measures in place to ensure that friends provide a neutral and balanced set of viewpoints. Fortunately, policy-makers have a rare opportunity to preempt the dangerous consequences of leaving this new technology to develop unchecked.
The time has come to create a Federal Friendship Commission to ensure that the immense power of social networks is not abused. For example, social network users who have their friend requests denied currently have no legal recourse. Users should have the option to appeal friend rejections to the FFC to verify that they don’t violate social network neutrality. Unregulated social networks will give many users a distorted view of the world dominated by the partisan, religious, and cultural prejudices of their immediate neighbors in the social graph. The FFC can correct this by requiring social networks to give equal time to any biased wall post.
However, others have suggested lighter-touch regulation, simply requiring each person to have friends of many races, religions, and political persuasions. Still others have suggested allowing information harms to be remedied through direct litigation—perhaps via tort reform that recognizes a new private right of action against violations of the “duty to friend.” As social networking software will soon be found throughout all aspects of society, urgent intervention is needed to forestall “The Tyranny of The Farmville.”
Of course, social network neutrality is just one of the policy tools regulators should use to ensure a level playing field. For example, the Department of Justice may need to more aggressively employ its antitrust powers to combat the recent dangerous concentration of social networking market share on popular micro-blogging services. But enacting formal social network neutrality rules is an important first step towards a more open web.
The Wall Street Journal today reports that many Facebook applications are handing over user information—specifically, Facebook IDs—to online advertisers. Since a Facebook ID can easily be linked to a user’s real name, third party advertisers and their downstream partners can learn the names of people who load their advertisement from those leaky apps. This reportedly happens on all ten of Facebook’s most popular apps and many others.
The Journal article provides few technical details about what they found, so here’s a bit more about what I think they’re reporting.
The content of a Facebook application, for example FarmVille, is loaded within an iframe on the Facebook page. An iframe essentially embeds one webpage (FarmVille) inside another (Facebook). This means that as you play FarmVille, your browser location bar will show http://apps.facebook.com/onthefarm, but the iframe content is actually controlled by the application developer, in this case by farmville.com.
The content loaded by farmville.com in the iframe contains the game alongside third party advertisements. When your browser goes to fetch an advertisement, it automatically forwards “referer” information to the third party advertiser—that is, the URL of the current page that’s loading the ad. For FarmVille, the Referer URL that’s sent will look something like:
http://fb-tc-2.farmville.com/flash.php?…fb_sig_user=[User’s Facebook ID]…
And there’s the issue. Because of the way Zynga (the makers of FarmVille) crafts some of its URLs to include the user’s Facebook ID, the browser will forward this identifying information on to third parties. I confirmed yesterday evening that using FarmVille does indeed transmit my Facebook ID to a few third parties, including Doubleclick, Interclick and socialvi.be.
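To get a sense of how little work recovering the ID takes on the receiving end, here is a hypothetical sketch of what a third party could do with a Referer header shaped like the FarmVille URL above (the Facebook ID and the extra query parameters are made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

def extract_facebook_id(referer):
    """Pull the fb_sig_user parameter, if present, out of a Referer URL."""
    params = parse_qs(urlparse(referer).query)
    ids = params.get("fb_sig_user")
    return ids[0] if ids else None

# A Referer header like the one a leaky app would send (ID is made up):
referer = "http://fb-tc-2.farmville.com/flash.php?a=1&fb_sig_user=100000123456789&b=2"
print(extract_facebook_id(referer))  # → 100000123456789
```

A few lines of standard-library parsing suffice; the advertiser can then join the ID against a public profile to get a real name.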
Facebook’s developer policies prohibit apps from passing identifying user data along to outside companies, but the evidence clearly indicates otherwise.
What can be done about this? First, application developers like Zynga can simply stop including the user’s Facebook ID in the HTTP GET arguments, or they can place a “#” mark before the sensitive information in the URL so browsers don’t transmit this information automatically to third parties.
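Python’s standard URL parser illustrates why the “#” trick works: everything after the “#” is the URL fragment, which the browser keeps to itself rather than sending in HTTP requests (and hence in Referer headers). A minimal sketch, using a made-up ID:

```python
from urllib.parse import urlparse

# Two hypothetical app URLs carrying the same (made-up) Facebook ID.
leaky = "http://fb-tc-2.farmville.com/flash.php?fb_sig_user=100000123456789"
safer = "http://fb-tc-2.farmville.com/flash.php#fb_sig_user=100000123456789"

# The query string is part of what the browser transmits...
print(urlparse(leaky).query)     # fb_sig_user=100000123456789
# ...but the fragment never leaves the browser.
print(urlparse(safer).query)     # (empty)
print(urlparse(safer).fragment)  # fb_sig_user=100000123456789
```

Scripts running in the page can still read the fragment locally, so the app keeps working, while third parties fetching ads see only the stripped URL.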
Second, Facebook can implement a proxy scheme, as proposed by Adrienne Felt more than two years ago, where applications would not receive real Facebook IDs but rather random placeholder IDs that are unique for each application. Then, application developers can be free do whatever they want with the placeholder IDs, since they can no longer be linked back to real user names.
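As a rough sketch of how such a proxy scheme might derive the placeholder IDs (this is my own illustration, not Felt’s or Facebook’s actual design), the platform could compute a keyed hash over each (app, user) pair, so the same user looks different to every app but stays stable within one app:

```python
import hashlib
import hmac

# Secret key known only to the platform; without it, placeholder IDs
# cannot be linked back to real Facebook IDs.
SECRET = b"platform-side secret key"

def placeholder_id(app_id, real_user_id):
    """Derive a stable, per-app pseudonymous ID for a user."""
    msg = f"{app_id}:{real_user_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

# The same (made-up) user gets unlinkable IDs in different apps...
print(placeholder_id("farmville", "100000123456789"))
print(placeholder_id("mafiawars", "100000123456789"))
# ...but the same ID every time within a single app.
```

Because only the platform holds the key, a leaked placeholder ID tells an advertiser nothing about who the user is, and IDs leaked from two different apps can’t even be correlated with each other.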
Third, browser vendors can give users easier and better control over when HTTP referer information is sent. As Chris Soghoian recently pointed out, browser vendors currently don’t make these controls very accessible to users, if at all. This isn’t a direct solution to the problem but it could help. You could imagine a privacy-enhancing opt-in browser feature that turns off the referer header in all cross-domain situations.
Some may argue that this leak, whether inadvertent or not, is relatively innocuous. But allowing advertisers and other third parties to easily and definitively correlate a real name with an otherwise “anonymous” IP address, cookie, or profile is a dangerous path forward for privacy. At the very least, Facebook and app developers need to be clear with users about their privacy rights and comply with their own stated policies.
I have a piece in today’s NY Times “Room for Debate” feature, on whether the government should regulate Facebook. In writing the piece, I was looking for a pithy way to express the problems with today’s notice-and-consent model for online privacy. After some thought, I settled on “privacy theater”.
Bruce Schneier has popularized the term “security theater,” denoting security measures that look impressive but don’t actually protect us—they create the appearance of security but not the reality. When a security guard asks to see your ID but doesn’t do more than glance at it, that’s security theater. Much of what happens at airport checkpoints is security theater too.
Worse yet, privacy policies are subject to change. When sites change their policies, we get another round of privacy theater, in which sites pretend to notify us of the changes, and we pretend to consider them before continuing our use of the site.
And yet, if we’re going to replace the notice-and-consent model, we need something else to put in its place. At this point, it’s hard to see what that might be. It might help to set up default rules, on the theory that a policy that states how it differs from the default might be shorter and simpler than a stand-alone policy, but that approach will only go so far.
In the end, we may be stuck with privacy theater, just as we’re often stuck with security theater. If we can’t provide the reality of privacy or security, we can settle for theater, which at least makes us feel a bit better about our vulnerability.
Facebook is once again clashing with its users over privacy. As a user myself, I was pretty unhappy about the recently changed privacy control. I felt that Facebook was trying to trick me into loosening controls on my information. Though the initial letter from Facebook founder Mark Zuckerberg painted the changes as pro-privacy — which led more than 48,000 users to click the “I like this” button — the actual effect of the company’s suggested new policy was to allow more public access to information. Though the company has backtracked on some of the changes, problems remain.
Some of you may be wondering why Facebook users are complaining about privacy, given that the site’s main use is to publish private information about yourself. But Facebook is not really about making your life an open book. It’s about telling the story of your life. And like any autobiography, your Facebook-story will include a certain amount of spin. It will leave out some facts and will likely offer more and different levels of detail depending on the audience. Some people might not get to hear your story at all. For Facebook users, privacy means not the prevention of all information flow, but control over the content of their story and who gets to read it.
So when Facebook tries to monetize users’ information by passing that information along to third parties, such as advertisers, users get angry. That’s what happened two years ago with Facebook’s ill-considered Beacon initiative: Facebook started telling advertisers what you had done — telling your story to strangers. But perhaps even worse, Facebook sometimes added items to your wall about what you had purchased — editing your story, without your permission. Users revolted, and Facebook shuttered Beacon.
Viewed through this lens, Facebook’s business dilemma is clear. The company is sitting on an ever-growing treasure trove of information about users. Methods for monetizing this information are many and obvious, but virtually all of them require either telling users’ stories to third parties, or modifying users’ stories — steps that would break users’ mental model of Facebook, triggering more outrage.
What Facebook has, in other words, is a governance problem. Users see Facebook as a community in which they are members. Though Facebook (presumably) has no legal obligation to get users’ permission before instituting changes, it makes business sense to consult the user community before making significant changes in the privacy model. Announcing a new initiative, only to backpedal in the face of user outrage, can’t be the best way to maximize long-term profits.
The challenge is finding a structure that allows the company to explore new business opportunities, while at the same time securing truly informed consent from the user community. Some kind of customer advisory board seems like an obvious approach. But how would the members be chosen? And how much information and power would they get? This isn’t easy to do. But the current approach isn’t working either. If your business is based on user buy-in to an online community, then you have to give that community some kind of voice — you have to make it a community that users want to inhabit.