Yesterday I posted some thoughts about Purdue University’s decision to destroy a video recording of my keynote address at its Dawn or Doom colloquium. The organizers had gone dark, and a promised public link was not forthcoming. After a couple of weeks of hoping to resolve the matter quietly, I did some digging and decided […]
Arlington v. FCC: What it Means for Net Neutrality
[Cross-posted on my blog, Managing Miracles] On Monday, the Supreme Court handed down a decision in Arlington v. FCC. At issue was a very abstract legal question: whether the FCC has the right to interpret the scope of its own authority in cases in which Congress has left the contours of its jurisdiction ambiguous. In […]
If Reddit Really Regrets "Not Taking Stronger Action Sooner", What Will It Do in the Future?
[Editor’s note: The New York Times weighed in with “When the Web’s Chaos Takes an Ugly Turn”, which includes several quotes from Tufekci.] Reddit may be the most important Internet forum that you have never heard of. It has more than a billion page-views a month, originates many Internet memes, brilliantly exposes hoaxes, hosts commentary […]
A Free Internet, If We Can Keep It
“We stand for a single internet where all of humanity has equal access to knowledge and ideas. And we recognize that the world’s information infrastructure will become what we and others make of it.”
These two sentences, from Secretary of State Clinton’s groundbreaking speech on Internet freedom, sum up beautifully the challenge facing our Internet policy. An open Internet can advance our values and support our interests; but we will only get there if we make some difficult choices now.
One of these choices relates to anonymity. Will it be easy to speak anonymously on the Internet, or not? This was the subject of the first question in the post-speech Q&A:
QUESTION: You talked about anonymity on line and how we have to prevent that. But you also talk about censorship by governments. And I’m struck by – having a veil of anonymity in certain situations is actually quite beneficial. So are you looking to strike a balance between that and this emphasis on censorship?
SECRETARY CLINTON: Absolutely. I mean, this is one of the challenges we face. On the one hand, anonymity protects the exploitation of children. And on the other hand, anonymity protects the free expression of opposition to repressive governments. Anonymity allows the theft of intellectual property, but anonymity also permits people to come together in settings that gives them some basis for free expression without identifying themselves.
None of this will be easy. I think that’s a fair statement. I think, as I said, we all have varying needs and rights and responsibilities. But I think these overriding principles should be our guiding light. We should err on the side of openness and do everything possible to create that, recognizing, as with any rule or any statement of principle, there are going to be exceptions.
So how we go after this, I think, is now what we’re requesting many of you who are experts in this area to lend your help to us in doing. We need the guidance of technology experts. In my experience, most of them are younger than 40, but not all are younger than 40. And we need the companies that do this, and we need the dissident voices who have actually lived on the front lines so that we can try to work through the best way to make that balance you referred to.
Secretary Clinton’s answer is trying to balance competing interests, which is what good politicians do. If we want A, and we want B, and A is in tension with B, can we have some A and some B together? Is there some way to give up a little A in exchange for a lot of B? That’s a useful way to start the discussion.
But sometimes you have to choose — sometimes A and B are profoundly incompatible. That seems to be the case here. Consider the position of a repressive government that wants to spy on a citizen’s political speech, as compared to the position of the U.S. government when it wants to eavesdrop on a suspect’s conversations under a valid search warrant. The two positions are very different morally, but they are pretty much the same technologically. Which means that either both governments can eavesdrop, or neither can. We have to choose.
Secretary Clinton saw this tension, and, being a lawyer, she saw that law could not resolve it. So she expressed the hope that technology, the aspect she understood least, would offer a solution. This is a common pattern: given a difficult technology policy problem, lawyers will tend to seek technology solutions and technologists will tend to seek legal solutions. (Paul Ohm calls this “Felten’s Third Law”.) It’s easy to reject non-solutions in your own area, because you have the knowledge to recognize why they will fail; but surely, one thinks, a solution must be lurking somewhere in the unexplored wilderness of the other area.
If we’re forced to choose — and we will be — what kind of Internet will we have? In Secretary Clinton’s words, “the world’s information infrastructure will become what we and others make of it.” We’ll have a free Internet, if we can keep it.
Watching Google's Gatekeepers
Google’s legal team has extraordinary power to decide which videos can be seen by audiences around the world, according to Jeffrey Rosen’s piece, “Google’s Gatekeepers”, in yesterday’s New York Times Magazine. Google, of course, owns YouTube, which gives it the technical ability to block particular videos — though so many videos are submitted that it’s impractical to review them all in advance.
Some takedown requests are easy — content that is offensive and illegal (almost) everywhere will come down immediately once a complaint is received and processed. But Rosen focuses on more difficult cases, where a government asks YouTube to take down a video that expresses dissent or is otherwise inconvenient for that government. Sometimes these videos violate local laws, but more often their legal status is murky, and in any case the laws in question may be contrary to widely accepted free speech principles.
Rosen worries that too much power to decide what can be seen is being concentrated in the hands of one company. He acknowledges that Google has behaved reasonably so far, but he worries about what might happen in the future.
I understand his point, but it’s hard to see an alternative that would be better in practice. If Google, as the owner of YouTube, is not going to have this power, then the power will have to be given to somebody else. Any nominations? I don’t have any.
What we’re left with, then, is Google making the decisions. But this doesn’t mean all of us are out in the cold, without influence. As consumers of Google’s services, we have a certain amount of leverage. And this is not just hypothetical — Google’s “don’t be evil” reputation contributes greatly to the value of its brand. The moment people think Google is misbehaving is the moment they’ll consider taking their business elsewhere.
As concerned members of the public — concerned customers, from Google’s viewpoint — there are things we can do to help keep Google honest. First, we can insist on transparency, that Google reveal what it is blocking and why. Rosen describes some transparency mechanisms that are in place, such as Google’s use of the Chilling Effects website.
Second, when we use Google’s services, we can try to minimize our switching costs, so that moving to an alternative service is a realistic possibility. The less we’re locked in to Google’s services, the less we’ll feel forced to keep using them even if the company’s behavior changes. And of course we should think carefully about switching costs in all our technology decisions, even when larger policy issues aren’t at stake.
Finally, we can make sure that Google knows we care about free speech, and about its corporate behavior generally. This means criticizing the company when it slips up, and praising it when it does well. Most of all, it means debating its decisions — which Rosen’s article helpfully invites us to do.