April 21, 2014

Collateral Freedom in China

OpenITP has just released a new report—Collateral Freedom—that studies the state of censorship circumvention tool usage in China today. From the report’s overview:

This report documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship—and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long term improvements in Internet freedom in China.

The circumvention tools that work best for these users are technologically diverse, but they are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Our survey respondents are relying not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.

Download the full report here: http://openitp.org/?q=node/44

The study was conducted by CITP alums David Robinson and me, along with Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund. We wrote it primarily for developers and funders of censorship circumvention technology projects, but it is also designed to be accessible for non-technical policymakers who are interested in Internet freedom, and for China specialists without technology background.

On kids and social networking

Sunday’s New York Times has an article about cyber-bullying that’s currently #1 on their “most popular” list, so this is clearly a topic that hits close to home for many readers.

The NYT article focuses on schools’ central role in policing their students’ social behavior. While I’m all in favor of students being taught, particularly by older peers, the importance of self-moderating their communications, schools face a fundamental quandary:

Nonetheless, administrators who decide they should help their cornered students often face daunting pragmatic and legal constraints.

“I have parents who thank me for getting involved,” said Mike Rafferty, the middle school principal in Old Saybrook, Conn., “and parents who say, ‘It didn’t happen on school property, stay out of my life.’ ”

Judges are flummoxed, too, as they wrestle with new questions about protections on student speech and school searches. Can a student be suspended for posting a video on YouTube that cruelly demeans another student? Can a principal search a cellphone, much like a locker or a backpack?

It’s unclear. These issues have begun their slow climb through state and federal courts, but so far, rulings have been contradictory, and much is still to be determined.

Here’s one example that really bothers me:

A few families have successfully sued schools for failing to protect their children from bullies. But when the Beverly Vista School in Beverly Hills, Calif., disciplined Evan S. Cohen’s eighth-grade daughter for cyberbullying, he took on the school district.

After school one day in May 2008, Mr. Cohen’s daughter, known in court papers as J. C., videotaped friends at a cafe, egging them on as they laughed and made mean-spirited, sexual comments about another eighth-grade girl, C. C., calling her “ugly,” “spoiled,” a “brat” and a “slut.”

J. C. posted the video on YouTube. The next day, the school suspended her for two days.

“What incensed me,” said Mr. Cohen, a music industry lawyer in Los Angeles, “was that these people were going to suspend my daughter for something that happened outside of school.” On behalf of his daughter, he sued.

If schools don’t have the authority to discipline J. C., as the court apparently ruled, and her father is more interested in defending her than disciplining her for clearly inappropriate behavior, then can we find some other solution?

Of course, there’s nothing new about bullying among the early-teenage set. I will refrain from dredging up such stories from my own pre-Internet, pre-SMS childhood, but there’s no question that these kids are at an important stage of their lives, where they’re still learning essential concepts, like how to relate to their peers and the importance (or lack thereof) of their peers’ approval, much less where to draw boundaries between their public selves and their private feelings. It’s important for us, the responsible adults of the world, to recognize that nothing we can say or do will change the fundamental social awkwardness of this age. There will never be an ironclad solution that eliminates kids bullying, taunting, or otherwise hurting one another.

Given all that, the rise of electronic communications (whether SMS text messaging, Facebook, email, or whatever else) changes the game in one very important way: it increases the velocity of communications. Every kid now has a megaphone for reaching their peers, whether directly through a Facebook posting that can reach hundreds of friends at once or indirectly through the viral spread of embarrassing gossip from friend to friend, and that speed can cause salacious information to get around well before any traditional mechanisms (parental, school administrative, or otherwise) can clamp down and assert some measure of sanity. For possibly the ultimate example of this, see the (possibly fictitious, yet nonetheless illustrative) story of a girl’s written hookup list, posted by her brother as revenge for her ratting out his hidden stash of beer. Needless to say, in one fell swoop, this girl’s life got turned upside down with no obvious way to repair the social damage.

Alright, we invented this social networking mess. Can we fix it?

The only mechanism I feel is completely inappropriate is this:

But Deb Socia, the principal at Lilla G. Frederick Pilot Middle School in Dorchester, Mass., takes a no-nonsense approach. The school gives each student a laptop to work on. But the students’ expectation of privacy is greatly diminished.

“I regularly scan every computer in the building,” Ms. Socia said. “They know I’m watching. They’re using the cameras on their laptops to check their hair and I send them a message and say: ‘You look great! Now go back to work.’ It’s a powerful way to teach kids: ‘I’m paying attention, you need to do what’s right.’ ”

Not only do I object to the Big Brother aspect of this (do schools still have 1984 on their reading lists?), but turning every laptop into a surveillance device creates a hugely tempting target for a variety of bad actors. Kids need and deserve some measure of privacy, at least to the extent that schools already protect them from arbitrary and unjustified search and seizure.

Surveillance is widely considered to be more acceptable when it’s being done by parents, who might insist they have their kids’ passwords in order to monitor them. Of course, kids of this age will reasonably want or need to have privacy from their parents as well (e.g., we don’t want to create conditions where victims of child abuse can be easily locked down by their family).

We could try to invent technical means to slow down the velocity of kids’ communications, which could mean adding delays as a function of the fanout of a message, or even giving viewers of any given message a kill switch over it that could reach back and nuke earlier forwarded copies sent to other parties. Of course, such mechanisms could be easily abused. Furthermore, if Facebook were to voluntarily create such a mechanism, kids might well migrate to other services that lack it. If we legislate that children of a certain age must have technically imposed communication limits across the board (e.g., limited numbers of SMS messages per day), then we could easily get into a world where a kid who hits a daily quota cannot communicate in an unexpectedly urgent situation (e.g., when stuck at a party where there’s drinking and needing a sober ride home).
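To make the fanout idea concrete, here is a minimal sketch in Python of how a service might hold a message longer the more recipients it has. Everything here, including the function names, the thresholds, and the logarithmic scaling, is hypothetical rather than a description of any real system.

    import math
    from datetime import datetime, timedelta

    # Hypothetical policy knobs -- not drawn from any real service.
    BASE_DELAY_MINUTES = 1   # a note to a handful of friends goes out almost immediately
    SMALL_GROUP_SIZE = 5

    def delivery_delay(fanout):
        """Hold a message longer the more recipients it has."""
        if fanout <= SMALL_GROUP_SIZE:
            return timedelta(minutes=BASE_DELAY_MINUTES)
        # Delay grows with the logarithm of the audience, so a post reaching
        # hundreds of friends waits noticeably longer than a private note.
        return timedelta(minutes=BASE_DELAY_MINUTES * math.log2(fanout))

    def scheduled_delivery_time(recipients):
        """Earliest time the service would actually deliver the message."""
        return datetime.now() + delivery_delay(len(recipients))

    print(delivery_delay(2))     # 0:01:00
    print(delivery_delay(300))   # a bit over eight minutes

Even a crude rule like this would leave one-on-one messages essentially instant while slowing the hundred-friend broadcast, though, as noted above, kids could simply migrate to a service that doesn’t impose it.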

Absent any reasonable technical solution, the proper answer is probably to restrict our kids’ access to social media until we think they’re mature enough to handle it, to make sure that we, the parents, educate them about the proper etiquette, and that we take responsibility for disciplining our kids when they misbehave.

iPad: The Disneyland of Computers

Tech commentators have a love/hate relationship with Apple’s new iPad. Those who try it tend to like it, but many dislike its locked-down App Store which only allows Apple-approved apps. Some people even see the iPad as the dawn of a new relationship between people and computers.

To me, the iPad is Disneyland.

I like Disneyland. It’s clean, safe, and efficient. There are lots of entertaining things to do. Kids can drive cars; adults can wear goofy hats with impunity. There’s a parade every afternoon, and an underground medical center in case you get sick.

All of this is possible because of central planning. Every restaurant and store on Disneyland’s Main Street is approved in advance by Disney. Every employee is vetted by Disney. Disneyland wouldn’t be Disneyland without central planning.

I like to visit Disneyland, but I wouldn’t want to live there.

There’s a reason the restaurants in Disneyland are bland and stodgy. It’s not just that centralized decision processes like Disney’s have trouble coping with creative, nimble, and edgy ideas. It’s also that customers know who’s in charge, so any bad dining experience will be blamed on Disney, making Disney wary of culinary innovation. In Disneyland the trains run on time, but they take you to a station just like the one you left.

I like living in a place where anybody can open a restaurant or store. I like living in a place where anybody can open a bookstore and sell whatever books they want. Here in New Jersey, the trains don’t always run on time, but they take you to lots of interesting places.

The richness of our cultural opportunities, and the creative dynamism of our economy, are only possible because of a lack of central planning. Even the best central planning process couldn’t hope to keep up with the flow of new ideas.

The same is true of Apple’s app store bureaucracy: there’s no way it can keep up with the flow of new ideas — no way it can offer the scope and variety of apps that a less controlled environment can provide. And like the restaurants of Disneyland, the apps in Apple’s store will be blander because customers will blame the central planner for anything offensive they might say.

But there’s a bigger problem with the argument offered by central planning fanboys. To see what it is, we need to look more carefully at why Disneyland succeeded when so many centrally planned economies failed so dismally.

What makes Disneyland different is that it is an island of central planning, embedded in a free society. This means that Disneyland can seek its suppliers, employees, and customers in a free economy, even while it centrally plans its internal operations. This can work well, as long as Disneyland doesn’t get too big — as long as it doesn’t try to absorb the free society around it.

The same is true of Apple and the iPad. The whole iPad ecosystem, from the hardware to Apple’s software to the third-party app software, is only possible because of the robust free-market structures that create and organize knowledge, and mobilize workers, in the technology industry. If Apple somehow managed to absorb the tech industry into its centrally planned model, the result would be akin to Disneyland absorbing all of America. That would be enough to frighten even the most rabid fanboy, but fortunately it’s not at all likely. The iPad, like Disneyland, will continue to be an island of central planning in a sea of decentralized innovation.

So, iPad users, enjoy your trip to Disneyland. I understand why you’re going there, and I might go there one day myself. But don’t forget: there’s a big exciting world outside, and you don’t want to miss it.

A Free Internet, If We Can Keep It

“We stand for a single internet where all of humanity has equal access to knowledge and ideas. And we recognize that the world’s information infrastructure will become what we and others make of it.”

These two sentences, from Secretary of State Clinton’s groundbreaking speech on Internet freedom, sum up beautifully the challenge facing our Internet policy. An open Internet can advance our values and support our interests; but we will only get there if we make some difficult choices now.

One of these choices relates to anonymity. Will it be easy to speak anonymously on the Internet, or not? This was the subject of the first question in the post-speech Q&A:

QUESTION: You talked about anonymity on line and how we have to prevent that. But you also talk about censorship by governments. And I’m struck by – having a veil of anonymity in certain situations is actually quite beneficial. So are you looking to strike a balance between that and this emphasis on censorship?

SECRETARY CLINTON: Absolutely. I mean, this is one of the challenges we face. On the one hand, anonymity protects the exploitation of children. And on the other hand, anonymity protects the free expression of opposition to repressive governments. Anonymity allows the theft of intellectual property, but anonymity also permits people to come together in settings that gives them some basis for free expression without identifying themselves.

None of this will be easy. I think that’s a fair statement. I think, as I said, we all have varying needs and rights and responsibilities. But I think these overriding principles should be our guiding light. We should err on the side of openness and do everything possible to create that, recognizing, as with any rule or any statement of principle, there are going to be exceptions.

So how we go after this, I think, is now what we’re requesting many of you who are experts in this area to lend your help to us in doing. We need the guidance of technology experts. In my experience, most of them are younger than 40, but not all are younger than 40. And we need the companies that do this, and we need the dissident voices who have actually lived on the front lines so that we can try to work through the best way to make that balance you referred to.

Secretary Clinton’s answer is trying to balance competing interests, which is what good politicians do. If we want A, and we want B, and A is in tension with B, can we have some A and some B together? Is there some way to give up a little A in exchange for a lot of B? That’s a useful way to start the discussion.

But sometimes you have to choose — sometimes A and B are profoundly incompatible. That seems to be the case here. Consider the position of a repressive government that wants to spy on a citizen’s political speech, as compared to the position of the U.S. government when it wants to eavesdrop on a suspect’s conversations under a valid search warrant. The two positions are very different morally, but they are pretty much the same technologically. Which means that either both governments can eavesdrop, or neither can. We have to choose.

Secretary Clinton saw this tension, and, being a lawyer, she saw that law could not resolve it. So she expressed the hope that technology, the aspect she understood least, would offer a solution. This is a common pattern: Given a difficult technology policy problem, lawyers will tend to seek technology solutions and technologists will tend to seek legal solutions. (Paul Ohm calls this “Felten’s Third Law”.) It’s easy to reject non-solutions in your own area because you have the knowledge to recognize why they will fail; but there must be a solution lurking somewhere in the unexplored wilderness of the other area.

If we’re forced to choose — and we will be — what kind of Internet will we have? In Secretary Clinton’s words, “the world’s information infrastructure will become what we and others make of it.” We’ll have a free Internet, if we can keep it.

No Warrant Necessary to Seize Your Laptop

The U.S. Customs may search your laptop and copy your hard drive when you cross the border, according to their policy. They may do this even if they have no particularized suspicion of wrongdoing on your part. They claim that the Fourth Amendment protection against warrantless search and seizure does not apply. The Customs justifies this policy on the grounds that “examinations of documents and electronic devices are a crucial tool for detecting information concerning” all sorts of bad things, including terrorism, drug smuggling, contraband, and so on.

Historically the job of Customs was to control the flow of physical goods into the country, and their authority to search you for physical goods is well established. I am certainly not a constitutional lawyer, but to me a Customs exemption from Fourth Amendment restrictions is more clearly justified for physical contraband than for generalized searches of information.

The American Civil Liberties Union is gathering data about how this Customs enforcement policy works in practice, and they request your help. If you’ve had your laptop searched, or if you have altered your own practices to protect your data when crossing the border, staff attorney Catherine Crump would be interested in hearing about it.

Meanwhile, the ACLU has released a stack of documents they got by FOIA request. The documents are here, and their spreadsheets analyzing the data are here. They would be quite interested to know what F-to-T readers make of these documents.

ACLU Queries for F-to-T readers:
If the answer to any of the questions below is yes, please briefly describe your experience and e-mail your response to laptopsearch at aclu.org. The ACLU promises confidentiality to anyone responding to this request.
(1) When entering or leaving the United States, has a U.S. official ever examined or browsed the contents of your laptop, PDA, cell phone, or other electronic device?

(2) When entering or leaving the United States, has a U.S. official ever detained your laptop, PDA, cell phone, or other electronic device?

(3) In light of the U.S. government’s policy of conducting suspicionless searches of laptops and other electronic devices, have you taken extra steps to safeguard your electronic information when traveling internationally, such as using encryption software or shipping a hard drive ahead to your destination?

(4) Has the U.S. government’s policy of conducting suspicionless searches of laptops and other electronic devices affected the frequency with which you travel internationally or your willingness to travel with information stored on electronic devices?

Information Technology Policy in the Obama Administration, One Year In

[Last year, I wrote an essay for Princeton's Woodrow Wilson School, summarizing the technology policy challenges facing the incoming Obama Administration. This week they published my follow-up essay, looking back on the Administration's first year. Here it is.]

Last year I identified four information technology policy challenges facing the incoming Obama Administration: improving cybersecurity, making government more transparent, bringing the benefits of technology to all, and bridging the culture gap between techies and policymakers. On these issues, the Administration’s first-year record has been mixed. Hopes were high that the most tech-savvy presidential campaign in history would lead to an equally transformational approach to governing, but bold plans were ground down by the friction of Washington.

Cybersecurity: The Administration created a new national cybersecurity coordinator (or “czar”) position but then struggled to fill it. Infighting over the job description — reflecting differences over how to reconcile security with other economic goals — left the czar relatively powerless. Cyberattacks on U.S. interests increased as the Administration struggled to get its policy off the ground.

Government transparency: This has been a bright spot. The White House pushed executive branch agencies to publish more data about their operations, and created rules for detailed public reporting of stimulus spending. Progress has been slow — transparency requires not just technology but also cultural changes within government — but the ship of state is moving in the right direction, as the public gets more and better data about government, and finds new ways to use that data to improve public life.

Bringing technology to all: On the goal of universal access to technology, it’s too early to tell. The FCC is developing a national broadband plan, in hopes of bringing high-speed Internet to more Americans, but this has proven to be a long and politically difficult process. Obama’s hand-picked FCC chair, Julius Genachowski, inherited a troubled organization but has done much to stabilize it. The broadband plan will be his greatest challenge, with lobbyists on all sides angling for advantage as our national network expands.

Closing the culture gap: The culture gap between techies and policymakers persists. In economic policy debates, health care and the economic crisis have understandably taken center stage, but there seems to be little room even at the periphery for the innovation agenda that many techies had hoped for. The tech policy discussion seems to be dominated by lawyers and management consultants, as in past Administrations. Too often, policymakers still see techies as irrelevant, and techies still see policymakers as clueless.

In recent days, creative thinking on technology has emerged from an unlikely source: the State Department. On the heels of Google’s surprising decision to back away from the Chinese market, Secretary of State Clinton made a rousing speech declaring Internet freedom and universal access to information as important goals of U.S. foreign policy. This will lead to friction with the Chinese and other authoritarian governments, but our principles are worth defending. The Internet can be a powerful force for transparency and democratization, around the world and at home.

Robots and the Law

Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town so I dropped by and heard an interesting discussion.

Here’s the official announcement:

Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.

The key questions are how robots differ from past technologies, and how those differences change the law and policy issues we face.

Three aspects of robots seemed to recur in the discussion: robots take action that is important in the world; robots act autonomously; and we tend to see robots as beings and not just machines.

The last issue — robots as beings — is mostly a red herring for our purposes, notwithstanding its appeal as a conversational topic. Robots are nowhere near having the rights of a person or even of a sentient animal, and I suspect that we can’t really imagine what it would be like to interact with a robot that qualified as a conscious being. Our brains seem to be wired to treat self-propelled objects as beings — witness the surprising acceptance of robot “dogs” that aren’t much like real dogs — but that doesn’t mean we should grant robots personhood.

So let’s set aside the consciousness issue and focus on the other two: acting in the world, and autonomy. These attributes are already present in many technologies today, even in the purely electronic realm. Consider, for example, the complex of computers, network equipment, and software that makes up Google’s data centers. Its actions have significant implications in the real world, and it is autonomous, at least in the sense in which the panelists seemed to be using the term “autonomous” — it exhibits complex behavior without direct, immediate human instruction, and its behavior is often unpredictable even to its makers.

In the end, it seemed to me that the legal and policy issues raised by future robots will not be new in kind, but will just be extrapolations of the issues we’re already facing with today’s complex technologies — and not a far extrapolation but more of a smooth progression from where we are now. These issues are important, to be sure, and I was glad to hear smart panelists debating them, but I’m not convinced yet that we need a law of the robot. When it comes to the legal challenges of technology, the future will be like the past, only more so.

Still, if talking about robots will get policymakers to pay more attention to important issues in technology policy, then by all means, let’s talk about robots.

A "Social Networking Safety Act"

At the behest of the state Attorney General, legislation to make MySpace and Facebook safer for children is gaining momentum in the New Jersey State Legislature.

The proposed Social Networking Safety Act, heavily marked-up with floor amendments, is available here. An accompanying statement describes the Legislative purpose. Explanations of the floor amendments are available here.

This bill would deputize MySpace and Facebook to serve as a branch of law enforcement. It does so in a very subtle way.

On the surface, it appears to be a perfectly reasonable response to concerns about cyberbullies in general and to the Lori Drew case in particular. New Jersey was the first state in the nation to pass Megan’s Law, requiring information about registered sex offenders to be made available to the public, and state officials hope to play a similar, pioneering role in the fight against cyberbullying.

The proposed legislation creates a civil right of action for customers who are offended by what they read on MySpace or Facebook. It allows the social network provider to sue customers who post “sexually offensive” or “harassing” communications. Here’s the statutory language:

No person shall transmit a sexually offensive communication through a social networking website to a person located in New Jersey who the actor knows or should know is less than 13 years of age, or is at least 13 but less than 16 years old and at least four years younger than the actor. A person who transmits a sexually offensive communication in violation of this subsection shall be liable to the social networking website operator in a civil action for damages of $1,000, plus reasonable attorney’s fees, for each violation. A person who transmits a sexually offensive communication in violation of this subsection shall also be liable to the recipient of the communication in a civil action for damages in the amount of $5,000, plus reasonable attorney’s fees, or actual damages…

The bill requires social network providers to design their user interfaces with icons that will allow customers to report “sexually offensive” or “harassing” communications:

A social networking website operator shall not be deemed to be in violation … if the operator maintains a reporting mechanism available to the user that meets the following requirements: (1) the social networking website displays, in a conspicuous location, a readily identifiable icon or link that enables a user or third party to report to the social networking website operator a sexually offensive communication or harassing communication transmitted through the social networking website.

Moreover, the social network provider must investigate complaints, call the police when “appropriate” and banish offenders:

A social networking website operator shall not be deemed to be in violation … if … (2) the operator conducts a review, in the most expedient time possible without unreasonable delay, of any report by a user or visitor, including investigation and referral to law enforcement if appropriate, and provides users and visitors with the opportunity to determine the status of the operator’s review or investigation of any such report.

Finally, if the social network provider fails to take action, it can be sued for consumer fraud:

[I]t shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) [the state Consumer Fraud Act] for a social networking website operator to fail to revoke, in the most expedient time possible without unreasonable delay, the website access of any user or visitor upon receipt of information that provides a reasonable basis to conclude that the visitor has violated [this statute]

So what’s the problem? It’s not a criminal statute, and we do want to shut down sex offenders and cyberbullies. How could anyone object to this proposed measure?

First, the proposed law puts a special burden on one specific type of technology. It’s as if the newfangledness of social networking—and its allure for kids—have made it a special target for our fears about sex offenders and cyberbullies. No similar requirements are being placed on e-mail providers, wikis, blogs or the phone company.

Second, it deputizes private companies to do the job of law enforcement. Social network providers will have to evaluate complaints and decide when to call the police.

Third, it’s the thin edge of the wedge. If social network providers have to investigate and report criminal activity, they will be enlisted to do more. Today, sex offenders and cyberbullies. Tomorrow, drug deals, terrorist threats and pornography.

Fourth, this raises First Amendment concerns. Social network providers, if they are called upon to monitor and punish “offensive” and “harassing” speech, effectively become an arm of law enforcement. To avoid the risk of lawsuits under the Consumer Fraud Act, they will have an incentive to ban speech that is protected under the First Amendment.

Fifth, the definitions of “offensive” and “harassing” are vague. The bill invokes the “reasonable person” standard, which is okay for garden-variety negligence cases, but not for constitutional issues like freedom of speech. It’s not clear just what kinds of communication will expose customers to investigation or liability.

If the bill is enacted, MySpace and Facebook could mount a legal challenge in federal court. They could argue that Congress intended to occupy the field of internet communication, and thus pre-empt state law, when it adopted the Communications Decency Act (CDA), 47 U.S.C. § 230(c)(1).

The bill probably violates the Dormant Commerce Clause as well. It would affect interstate commerce by differentially regulating social networking websites. Social networking services outside New Jersey can simply ignore the requirements of state law. Federal courts have consistently struck down these sorts of laws, even when they are designed to protect children.

In my opinion, the proposed legislation projects our worst fears about stalkers and sex predators onto a particular technology—social networking. There are already laws that address harassment and obscenity, and internet service providers are already obliged to cooperate with law enforcement.

Studies suggest that for kids online, education is better than restriction. This is the conclusion of the Internet Safety Technical Task Force, convened by the State Attorneys General of the United States, in its report Enhancing Child Safety and Online Technologies. According to another study funded by the MacArthur Foundation, social networking provides benefits, including opportunities for self-directed learning and independence.

Tech Policy Challenges for the Obama Administration

[Princeton's Woodrow Wilson School asked me to write a short essay on information technology challenges facing the Obama Administration, as part of the School's Inaugural activities. Here is my essay.]

Digital technologies can make government more effective, open and transparent, and can make the economy as a whole more flexible and efficient. They can also endanger privacy, disrupt markets, and open the door to cyberterrorism and cyberespionage. In this crowded field of risks and opportunities, it makes sense for the Obama administration to focus on four main challenges.

The first challenge is cybersecurity. Government must safeguard its own mission critical systems, and it must protect privately owned critical infrastructures such as the power grid and communications network. But it won’t be enough to focus only on a few high priority, centralized systems. Much of digital technology’s value—and, today, many of the threats—come from ordinary home and office systems. Government can use its purchasing power to nudge the private sector toward products that are more secure and reliable; it can convene standards discussions; and it can educate the public about basic cybersecurity practices.

The second challenge is transparency. We can harness the potential of digital technology to make government more open, leading toward a better informed and more participatory civic life. Some parts of government are already making exciting progress, and need high-level support; others need to be pushed in the right direction. One key is to ensure that data is published in ways that foster reuse, to support an active marketplace of ideas in which companies, nonprofits, and individuals can find the best ways to analyze, visualize, and “mash up” government information.

The third challenge is to maintain and increase America’s global lead in information technology, which is vital to our prosperity and our role in the world. While recommitting to our traditional strengths, we must work to broaden the reach of technology. We must bring broadband Internet connections to more Americans, by encouraging private-sector investment in high-speed network infrastructure. We must provide better education in information technology, no less than in science or math, to all students. Government cannot solve these problems alone, but can be a catalyst for progress.

The final challenge is to close the culture gap between politicians and technology leaders. The time for humorous anecdotes about politicians who “don’t get” technology, or engineers who are blind to the subtleties of Washington, is over. Working together, we can translate technological progress into smarter government and a more vibrant, dynamic private sector.

Economic Growth, Censorship, and Search Engines

Economic growth depends on an ability to access relevant information. Censorship prevents access to certain information, but its direct consequences are well-known and somewhat predictable. For example, blocking access to Falun Gong literature is unlikely to harm a country’s consumer electronics industry. On the web, however, information of all types is interconnected. Blocking a web page might have an indirect impact reaching well beyond that page’s contents. To understand this impact, let’s consider how search results are affected by censorship.

Search engines keep track of what’s available on the web and suggest useful pages to users. No comprehensive list of web pages exists, so search providers check known pages for links to unknown neighbors. If a government blocks a page, all links from the page to its neighbors are lost. Unless detours exist to the page’s unknown neighbors, those neighbors become unreachable and remain unknown. These unknown pages can’t appear in search results — even if their contents are uncontroversial.
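To illustrate the reachability problem, here is a toy crawler sketch in Python. The four-page link graph and its page names are invented purely for illustration.

    from collections import deque

    # A toy link graph: each page lists the pages it links to.
    LINKS = {
        "portal": ["news", "blocked_page"],
        "news": ["portal"],
        "blocked_page": ["recipes"],  # the only link pointing at "recipes"
        "recipes": [],
    }

    def crawl(seeds, blocked):
        """Discover pages by following links, skipping anything the censor blocks."""
        known, queue = set(), deque(seeds)
        while queue:
            page = queue.popleft()
            if page in known or page in blocked:
                continue  # a blocked page's outgoing links are never seen
            known.add(page)
            queue.extend(LINKS.get(page, []))
        return known

    print(crawl(["portal"], blocked=set()))             # all four pages are found
    print(crawl(["portal"], blocked={"blocked_page"}))  # only "portal" and "news"

Once “blocked_page” is filtered out, “recipes” is never discovered, even though its contents are entirely uncontroversial.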

When presented with a query, search engines respond with relevant known pages sorted by expected usefulness. Censorship also affects this sorting process. In predicting usefulness, search engines consider both the contents of pages and the links between pages. Links here are like friendships in a stereotypical high school popularity contest: the more popular friends you have, the more popular you become. If your friend moves away, you become less popular, which makes your friends less popular by association, and so on. Even people you’ve never met might be affected.

“Popular” web pages tend to appear higher in search results. Censoring a page distorts this popularity contest and can change the order of even unrelated results. As more pages are blocked, the censored view of the web becomes increasingly distorted. As an aside, Ed notes that blocking a page removes more than just the offending material. If censors block Ed’s site due to an off-hand comment on Falun Gong, he also loses any influence he has on information security.
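A tiny, simplified PageRank-style calculation makes the distortion concrete. The five-page graph, the damping factor, and the page names below are all made up, and real search engines use far more elaborate ranking signals.

    def pagerank(links, damping=0.85, iters=50):
        """Simplified link-based popularity: a page is popular when popular pages link to it."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # Toy web: pages c and d link to x and b; x links to a.
    full_web = {"c": ["x", "b"], "d": ["x", "b"], "x": ["a"], "a": [], "b": []}
    # The censor blocks x, so x and every link pointing at it disappear.
    censored_web = {"c": ["b"], "d": ["b"], "a": [], "b": []}

    print(pagerank(full_web))      # a outranks b
    print(pagerank(censored_web))  # b now outranks a

In this toy web, page “a” outranks page “b” only because the soon-to-be-censored page “x” links to it; once “x” is blocked, “b” overtakes “a”, even though neither page was ever touched by the censor.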

These effects would typically be rare and have a disproportionately small impact on popular pages. Google’s emphasis on the long tail, however, suggests that considerable value lies in providing high-quality results covering even less-popular pages. To avoid these issues, a government could grant a limited set of individuals full web access to develop tools like search engines, but this approach seems likely to stifle competition and innovation.

Countries with greater censorship might produce lower-quality search engines, but Google, Yahoo, Microsoft, and others can provide high-quality search results in those countries. These companies can access uncensored data, mitigating the indirect effects of censorship. This underscores the significance of measures like the Global Network Initiative, whose participants include Google, Yahoo, and Microsoft. Among other things, the initiative provides guidelines for participants regarding when and how information access may be restricted. The effectiveness of this specific initiative remains to be seen, but such measures may provide leading search engines with greater leverage to resist arbitrary censorship.

Search engines are unlikely to be the only tools adversely impacted by the indirect effects of censorship. Any tool that relies on links between information (think social networks) might be affected, and repressive states place themselves at a competitive disadvantage in developing these tools. Future developments might make these points moot: in a recent talk at the Center, Ethan Zuckerman mentioned tricks and trends that might make censorship more difficult. In the meantime, however, governments that censor information may increasingly find that they do so at their own expense.