
Archives for 2010

What Open Data Means to Marginalized Communities

Two symbols of this era of open data are President Obama’s Open Government Initiative, a directive that has led agencies to post their results online and open up data sets, and Ushahidi, a tool for crowdsourcing crisis information. While these tools are bringing openness to governance and crisis response, respectively, I believe we have yet to find a good answer to the question: what does open data mean for the long-term social and economic development of poor and marginalized communities?

I came to Nairobi on a hunch. The hunch was that a small digital mapping experiment taking place in the Kibera slum would matter deeply, both for Kiberans who want to improve their community, and for practitioners keen to use technology to bring the voiceless into a conversation about how resources are allocated on their behalf.

So far I haven’t been disappointed. Map Kibera, an effort to create the first publicly available map of Kibera, is the brainchild of Mikel Maron, a technologist and OpenStreetMap founder, and Erica Hagen, a new media and development expert, and is driven by a group of 13 intrepid mappers from the Kibera community. In partnership with SODNET (an incredible local technology-for-social-change group), Phase I was the creation of the initial map layer on OpenStreetMap (see Mikel’s recent presentation at Where 2.0). Phase II, with the generous support of UNICEF, will focus on making the map useful for even the most marginalized groups within the Kibera community.

What we have in mind is quite simple: add massive amounts of data to the map in three categories (health services, public safety/vulnerability, and informal education), then experiment with ways to increase awareness and the ability to advocate for better service provision. The resulting toolbox, which will involve both no-tech approaches (drawing on printed maps) and tech approaches (SMS reporting, Ushahidi, and new media creation), will help us collectively answer questions about how open data itself, and the narration of such data through citizen media and face-to-face conversations, can help even the most marginalized transform their communities.

We hope the methodology we develop, which will be captured on our wiki, can be adapted by other communities around Kenya, and in places like Haiti, where it is critical to enable Haitians to own their own vision of a renewed nation.

Cross-posted to the In An African Minute blog.

iPad: The Disneyland of Computers

Tech commentators have a love/hate relationship with Apple’s new iPad. Those who try it tend to like it, but many dislike its locked-down App Store which only allows Apple-approved apps. Some people even see the iPad as the dawn of a new relationship between people and computers.

To me, the iPad is Disneyland.

I like Disneyland. It’s clean, safe, and efficient. There are lots of entertaining things to do. Kids can drive cars; adults can wear goofy hats with impunity. There’s a parade every afternoon, and an underground medical center in case you get sick.

All of this is possible because of central planning. Every restaurant and store on Disneyland’s Main Street is approved in advance by Disney. Every employee is vetted by Disney. Disneyland wouldn’t be Disneyland without central planning.

I like to visit Disneyland, but I wouldn’t want to live there.

There’s a reason the restaurants in Disneyland are bland and stodgy. It’s not just that centralized decision processes like Disney’s have trouble coping with creative, nimble, and edgy ideas. It’s also that customers know who’s in charge, so any bad dining experience will be blamed on Disney, making Disney wary of culinary innovation. In Disneyland the trains run on time, but they take you to a station just like the one you left.

I like living in a place where anybody can open a restaurant or store. I like living in a place where anybody can open a bookstore and sell whatever books they want. Here in New Jersey, the trains don’t always run on time, but they take you to lots of interesting places.

The richness of our cultural opportunities, and the creative dynamism of our economy, are only possible because of a lack of central planning. Even the best central planning process couldn’t hope to keep up with the flow of new ideas.

The same is true of Apple’s app store bureaucracy: there’s no way it can keep up with the flow of new ideas — no way it can offer the scope and variety of apps that a less controlled environment can provide. And like the restaurants of Disneyland, the apps in Apple’s store will be blander, because customers will blame the central planner for anything offensive an app might say.

But there’s a bigger problem with the argument offered by central planning fanboys. To see what it is, we need to look more carefully at why Disneyland succeeded when so many centrally planned economies failed so dismally.

What makes Disneyland different is that it is an island of central planning, embedded in a free society. This means that Disneyland can seek its suppliers, employees, and customers in a free economy, even while it centrally plans its internal operations. This can work well, as long as Disneyland doesn’t get too big — as long as it doesn’t try to absorb the free society around it.

The same is true of Apple and the iPad. The whole iPad ecosystem, from the hardware to Apple’s software to the third-party app software, is only possible because of the robust free-market structures that create and organize knowledge, and mobilize workers, in the technology industry. If Apple somehow managed to absorb the tech industry into its centrally planned model, the result would be akin to Disneyland absorbing all of America. That would be enough to frighten even the most rabid fanboy, but fortunately it’s not at all likely. The iPad, like Disneyland, will continue to be an island of central planning in a sea of decentralized innovation.

So, iPad users, enjoy your trip to Disneyland. I understand why you’re going there, and I might go there one day myself. But don’t forget: there’s a big exciting world outside, and you don’t want to miss it.

CITP Expands Scope of RECAP

Today, we’re thrilled to announce the next version of our RECAP technology, dramatically expanding the scope of the project.

Having had some modest success at providing public access to legal documents, we’re now taking the next logical step, offering easy public access to illegal documents.

The Internet Archive, which graciously hosts RECAP’s repository of legal documents, was strangely unreceptive to our offer to let them store the world’s most comprehensive library of illegal documents. Fortunately, the Pirate Bay was happy to step in and help.

Interested in seeing what’s available? Then you might want to watch our brief instructional video.

Pseudonyms: The Natural State of Online Identity

I’ve been writing recently about the problems that arise when you try to use cryptography to verify who is at the other end of a network connection. The cryptographic math works, but that doesn’t mean you get the identity part right.

You might think, from this discussion, that crypto by itself does nothing — that cryptographic security can only be bootstrapped from some kind of real-world identity verification. That’s the way it works for website certificates, where a certificate authority has to check your bona fides before it will issue you a certificate.

But this intuition turns out to be wrong. There is one thing that crypto can do perfectly, without any real-world support: providing pseudonyms. Indeed, crypto is so good at supporting pseudonyms that we can practically say that pseudonyms are the natural state of identity online.

To explain why this is true, I need to offer a gentle introduction to a basic crypto operation: digital signatures. Suppose John Doe (“JD”) wants to use digital signatures. First, JD needs to create a private cryptographic key, which he does by generating some random numbers and combining them according to a special geeky recipe. The result is a unique private key that only JD knows. Next, JD uses a certain procedure to determine the public key that corresponds to his private key. He announces the public key to everyone. The math guarantees that (1) JD’s public key is unique and corresponds to JD’s private key, and (2) a person who knows JD’s public key can’t figure out JD’s private key.

Now JD can make digital signatures. If JD wants to “sign” a certain message M, he combines M with JD’s private key in a special way, and the result is JD’s “signature on M”. Now anybody can verify the signature, using JD’s public key. Only JD can make the signature, because only JD knows JD’s private key; but anybody can verify the signature.

At no point in this process does JD tell anybody who he is — I called him “John Doe” for a reason. Indeed, JD’s public key is a perfect pseudonym: it conveys nothing about JD’s actual identity, yet it has a distinct “owner” whose presence can be verified. (“You’re really the person who created this public key? Then you should be able to make a signature on the message ‘squeamish ossifrage’ for me….”)
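The whole recipe — generate a private key, derive a public key, sign, verify — can be sketched with a toy "textbook RSA" signature in Python. To be clear, this is my own illustrative sketch, not anything from the post: the key sizes are absurdly small, there is no padding scheme, and real systems use vetted libraries instead. The function names (`make_keypair`, `sign`, `verify`) are invented for the example. But it does show the essential point: the keypair is a pseudonym minted from nothing but random numbers and math, and signing the challenge message proves ownership of it.

```python
import hashlib
import random
from math import gcd

def _toy_prime(bits=16):
    """Trial-division prime generator -- fine for a toy, far too small for real use."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full bit length
        if all(n % p for p in range(3, int(n ** 0.5) + 1, 2)):
            return n

def make_keypair():
    """Mint a fresh (public, private) pair -- in effect, a brand-new pseudonym."""
    p, q = _toy_prime(), _toy_prime()
    while p == q:
        q = _toy_prime()
    n, phi = p * q, (p - 1) * (q - 1)
    e = 3
    while gcd(e, phi) != 1:   # pick a public exponent coprime to phi
        e += 2
    d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)
    return (n, e), d          # public key (announced), private key (kept secret)

def sign(message, private_d, public_n):
    """'Combine M with the private key': exponentiate a hash of M mod n."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % public_n
    return pow(h, private_d, public_n)

def verify(message, signature, public_key):
    """Anybody holding only the public key can check the signature."""
    n, e = public_key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

public, private = make_keypair()
sig = sign(b"squeamish ossifrage", private, public[0])
assert verify(b"squeamish ossifrage", sig, public)        # owner proved control
assert not verify(b"a different message", sig, public)    # signature doesn't transfer
```

Note that `make_keypair` needs no coordination with anyone: run it twice and you have two unlinkable pseudonyms, which is exactly the "fresh pseudonym whenever you want" property described below.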

Using this method, anybody can make up a fresh pseudonym whenever they want. If you can generate random numbers and do some math (or have your computer do those things for you), then you can make a fresh pseudonym. You can make as many as you want, without needing to coordinate with anybody. This is all easy to do.

These methods, pseudonyms and signatures, are used even in cases where we want to verify somebody’s real-world identity. When you connect to (say) https://mail.google.com, Google’s web server gives you its public key — a pseudonym — along with a digital certificate that attests that that public key — that pseudonym — belongs to Google Inc. Binding public keys — pseudonyms — to real-world identities is tedious and messy, but of course this is often necessary in practice.

Online, identities are hard to manage. Pseudonyms are easy.

China, the Internet and Google: what I planned to say

In the run-up to and aftermath of Google’s decision yesterday to remove its Chinese search engine from China, I wrote two posts on my personal blog: “Chinese netizens’ open letter to the Chinese government and Google” and “One Google, One World; One China, No Google.”

Today, the Congressional-Executive Commission on China conducted a hearing titled Google and Internet Control in China: A Nexus Between Human Rights and Trade? They had originally invited me to testify in a similarly titled hearing, “China, the Internet and Google,” which was postponed and rescheduled twice: the first attempt was foiled by the Great Snowcalypse, and the second, scheduled for March 1st, was postponed again at the last minute for reasons that aren’t entirely clear. Meanwhile, I had already written my testimony, improved by very helpful input from the CITP community. Unfortunately, when they rescheduled the hearing, they told me I was no longer invited: they wanted the hearing to feature different witnesses from recent related hearings in the House and Senate. Given that I appeared in both of those hearings, it seems reasonable that they’d want to hear from some other people.

Given the effort that went into my testimony, however, and since it drills down into China in much more detail than my testimony for the other hearings did, I think there is some value in sharing it with the world. Here is the PDF and here it is as a web page. Some highlights:

From the introduction:

China is pioneering a new kind of Internet-age authoritarianism. It is demonstrating how a non-democratic government can stay in power while simultaneously expanding domestic Internet and mobile phone use. In China today there is a lot more give-and-take between government and citizens than in the pre-Internet age, and this helps bolster the regime’s legitimacy with many Chinese Internet users who feel that they have a new channel for public discourse. Yet on the other hand, as this Commission’s 2009 Annual Report clearly outlined, Communist Party control over the bureaucracy and courts has strengthened over the past decade, while the regime’s institutional commitments to protect the universal rights and freedoms of all its citizens have weakened.

Google’s public complaint about Chinese cyber-attacks and censorship occurred against this backdrop. It reflects a recognition that China’s status quo – at least when it comes to censorship, regulation, and manipulation of the Internet – is unlikely to improve any time soon, and may in fact continue to get worse.

Overview of Chinese Internet controls

Chinese government attempts to control online speech began in the late 1990s with a focus on the filtering or “blocking” of Internet content. Today, the government deploys an expanding repertoire of tactics.

In other words, filtering is just one of many ways that the Chinese government limits and controls speech on the Internet. The full text then gives descriptions and explanations of the other tactics, but in brief they include:

  • deletion or removal of content at the source
  • device and local-level controls
  • domain name controls
  • localized disconnection or restriction
  • self-censorship due to surveillance
  • cyber-attacks
  • government “astro-turfing” and “outreach”
  • targeted police intimidation

I then describe a number of efforts by Chinese netizens to push back against these tactics, which include (see the full text for further explanation):

  • informal anti-censorship support networks
  • distributed web-hosting assistance networks
  • crowdsourced “opposition research”
  • preservation and redistribution of censored content
  • humorous “viral” protests
  • public persuasion efforts

I end with a set of recommendations. Once again, see the full text for explanations, but here is the basic list:

  • anti-censorship tools – including outreach and education in their use
  • anonymity and security tools – to help people better defend against cyber-attacks, spyware, and surveillance
  • platforms and networks for the capture, storage, and redistribution of content that gets deleted from domestic social networking and publishing services
  • support for “opposition research” – remember the Chinese netizens who deconstructed Green Dam?
  • corporate responsibility – see Global Network Initiative, but also appropriate legislation if American and other Western Internet companies fail to accept the idea that they have some obligations as far as free expression and privacy are concerned
  • private right of action – so that Chinese victims can sue U.S. companies in U.S. courts
  • incentives for innovation by the private sector that helps Chinese Internet users access blocked sites as well as protect themselves from attacks and surveillance.

My conclusion:

Many of China’s 384 million Internet users are engaged in passionate debates about their communities’ problems, public policy concerns, and their nation’s future. Unfortunately these public discussions are skewed, blinkered, and manipulated – thanks to political censorship and surveillance. The Chinese people are proud of their nation’s achievements and generally reject critiques by outsiders even if they agree with some of them. A democratic alternative to China’s Internet-age authoritarianism will only be viable if it is conceived and built by the Chinese people from within. In helping Chinese “netizens” conduct an un-manipulated and un-censored discourse about their future, the United States will not be imposing its will on the Chinese people, but rather helping the Chinese people take ownership of their own future.