Archives for June 2009

CITP Seeking New Associate Director

In the next few days, I’ll be writing a post to announce CITP’s visiting fellows for the upcoming 2009-2010 academic year. But first, today, I want to let you know about a change in the Center’s leadership structure. After serving for two years as CITP’s first-ever Associate Director, David Robinson will be leaving us in August to begin law school at Yale. As a result, we are now launching a search for a new Associate Director.

As Associate Director, he helped oversee CITP’s growth into a larger, more mature organization, our move into a great new space in Sherrerd Hall, and two years of our busy activities calendar. He has been an integral part of the Center’s management and its research activities. David has done a fantastic job, and we’ll miss him, but we understand and support his decision to go on to law school as the next stage of his sure-to-be-stellar career. David will remain engaged with the Center’s research, and we expect to cross paths with him often in the future.

The new Associate Director will pick up where David leaves off, taking our Center to the next level in its development. The job is a fabulous opportunity to exercise leadership, vision and dedication: As a startup, we are improvising and learning while we grow, constantly looking for new and better ways to advance the policy debate and public understanding of digital technologies through both technical and policy research. Our first challenge was to get things started—now that we are established, a key priority for the new Associate Director will be building richer and deeper links and collaborations with other faculty members, policymakers, and the tech policy community generally. Here’s the official job description, soon to appear on the University’s “Jobs at Princeton” web site:

The Associate Director serves as a core organizer and evangelist for the Center, both on campus and beyond. Working with the existing Center staff, the Associate Director will develop, plan and execute the Center’s public activities, including lecture series, workshops and policy briefings; recruit visiting researchers and policy experts and coordinate the selection and appointment process; cultivate research collaborations, joint public events and other activities to build faculty engagement in the Center; coordinate interdisciplinary grant writing as appropriate; and develop and maintain the Center’s website and other published materials.

One of David’s last projects at the Center will be to coordinate the search process for his replacement. The search will continue until the position is filled: We hope to have the new Associate Director in place by the start of the school year. Applicants should provide a cover letter, CV, and contact information for three references. These materials can be sent to David (or equivalently, once the University’s jobs site has the listing, they can also be submitted through that route). David will also be happy to answer any questions about the position.

The rise of the "nanostory"

In today’s Wall Street Journal, I offer a review of Bill Wasik’s excellent new book, And Then There’s This: How Stories Live and Die in Viral Culture. CliffsNotes version: This is a great new take on the little cultural boomlets and cryptic fads that seem to swarm all over the Internet. The author draws on his personal experience, including his creation of the still-hilarious Right Wing New York Times. Here’s a taste from the book itself—Wasik describing his decision to create the first flash mob:

It was out of the question to create a project that might last, some new institution or some great work of art, for these would take time, exact cost, require risk, even as their odds of success hovered at nearly zero. Meanwhile, the odds of creating a short-lived sensation, of attracting incredible attention for a very brief period of time, were far more promising indeed… I wanted my new project to be what someone would call “The X of the Summer” before I even contemplated exactly what X might be.

China's New Mandatory Censorware Creates Big Security Flaws

Today Scott Wolchok, Randy Yao, and Alex Halderman at the University of Michigan released a report analyzing Green Dam, the censorware program that the Chinese government just ordered installed on all new computers in China. The researchers found that Green Dam creates very serious security vulnerabilities on users’ computers.

The report starts with a summary of its findings:

The Chinese government has mandated that all PCs sold in the country must soon include a censorship program called Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material. We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process. We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

The researchers have released a demonstration attack which will crash the browser of any Green Dam user. Another attack, for which they have not released a demonstration, allows any web page to seize control of any Green Dam user’s computer.

This is a serious blow to the Chinese government’s mandatory censorware plan. Green Dam’s insecurity is a show-stopper — no responsible PC maker will want to preinstall such dangerous software. The software can be fixed, but it will take a while to test the fix, and there is no guarantee that the next version won’t have other flaws, especially in light of the blatant errors in the current version.

On China's new, mandatory censorship software

The New York Times reports that China will start requiring censorship software on PCs. One interesting quote stands out:

Zhang Chenming, general manager of Jinhui Computer System Engineering, a company that helped create Green Dam, said worries that the software could be used to censor a broad range of content or monitor Internet use were overblown. He insisted that the software, which neutralizes programs designed to override China’s so-called Great Firewall, could simply be deleted or temporarily turned off by the user. “A parent can still use this computer to go to porn,” he said.

In this post, I’d like to consider the different capabilities that software like this could give to the Chinese authorities, without getting too much into their motives.

First, and most obviously, this software allows the authorities to filter web sites and network services, whether they originate inside or outside of the Great Firewall. By operating directly on a client machine, the filter can observe the operation of Tor, VPNs, and other firewall-evading software, allowing connections to a given target machine to be blocked regardless of how the client tries to get there. (You can’t accomplish “surgical” Tor and VPN filtering if you’re only operating inside the network. You need to be on the end host to see where the connection is ultimately going.)
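The asymmetry described above can be sketched in a few lines. This is a purely illustrative model (the hostnames and function names are hypothetical, not from any real censorware): a filter inside the network sees only the next hop of each packet, while a filter on the end host can hook the application’s connection request and see the ultimate destination before it enters a tunnel.

```python
# Illustrative sketch of end-host vs. network-level filtering.
# All names here are hypothetical examples, not real systems.

BLOCKLIST = {"blocked.example.org"}

def network_filter_blocks(packet_dst: str) -> bool:
    """A filter inside the network sees only the immediate next hop.
    For tunneled traffic, that is the VPN or Tor entry node, not the
    real destination."""
    return packet_dst in BLOCKLIST

def endhost_filter_blocks(requested_host: str) -> bool:
    """A filter running on the end host can intercept the application's
    connection request and see the ultimate destination before the
    traffic is wrapped in a tunnel."""
    return requested_host in BLOCKLIST

# The user requests a blocked site through a VPN whose entry point
# is not itself on the blocklist:
requested = "blocked.example.org"
on_the_wire = "vpn-entry.example.net"  # all a network observer sees

assert not network_filter_blocks(on_the_wire)  # tunnel evades the network filter
assert endhost_filter_blocks(requested)        # the end host still blocks it
```

The point is architectural, not cryptographic: once the tunnel hides the destination from the network, only software sitting on the client can recover it.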

Software like this can do far more, since it can presumably be updated remotely to support any feature desired by the government authorities. This could be the ultimate “Big Brother Inside” feature. Not only can the authorities observe behavior or scan files within one given computer, but every computer now becomes a launching point for investigating other machines reachable over a local area network. If one such machine were connected, for example, to a private home network, behind a security firewall, the government software could still scan every other computer on the same private network, log every packet, and so forth. Would you be willing to give your friends the password to log into your private wireless network, knowing their machine might be running this software?

Perhaps less ominously, software like this could also be used to force users to install security patches, to uninstall zombie/botnet systems, and to perform other sorts of remote systems administration. I can only imagine the difficulty of trying to run the Central Government Bureau of National Systems Administration (would they have a phone number you could call to complain when your computer isn’t working, and could they fix it remotely?), but the technological base is now there.

Of course, anybody who owns their own computer will be able to circumvent this software. If you control your machine, you can control what’s running on it. Maybe you can pretend to be running the software, maybe not. That would turn into a technological arms race which the authorities would ultimately fail to win, though they might succeed in creating enough fear, uncertainty, and doubt to deter would-be circumventors.

This software will also have a notable impact in Internet cafes, schools, and other sorts of “public” computing resources, which are exactly the sorts of places people might go when they want to hide their identity, and where the authorities could conduct physical audits to check for compliance.

Big Brother is watching.

Internet Voting: How Far Can We Go Safely?

Yesterday I chaired an interesting panel on Internet Voting at CFP. Participants included Amy Bjelland and Craig Stender (State of Arizona), Susan Dzieduszycka-Suinat (Overseas Vote Foundation), Avi Rubin (Johns Hopkins), and Alec Yasinsac (Univ. of South Alabama). Thanks to David Bruggeman and Cameron Wilson at USACM for setting up the panel.

Nobody advocated a full-on web voting system that would allow voting from any web browser. Instead, the emphasis was on more modest steps, aimed specifically at overseas voters. Overseas voters are a good target population, because there aren’t too many of them — making experimentation less risky — and because vote-by-mail serves them poorly.

Discussion focused on two types of systems: voting kiosks, and Internet transmission of absentee ballots.

A voting kiosk is a computer-based system, running carefully configured software, that is set up in a securable location overseas. Voters come to this location, authenticate themselves, and vote just as they would in a polling place back home. A good kiosk system keeps an electronic record, which is transmitted securely across the Internet to voting officials in the voter’s home jurisdiction. It also keeps a paper record, verifiable by the voter, which is sent back to voting officials after the elections, enabling a post-election audit. A kiosk can use optical-scan technology or it can be a touch-screen machine with a paper trail — essentially it’s a standard voting system with a paper trail, connected to home across the Internet. If the engineering is done right, if the home system that receives the electronic ballots is walled off from the central vote-tabulating system, and if appropriate post-election auditing is done, this system can be secure enough to use. All of the panelists agreed that this type of system is worth trying, at least as a pilot test.
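The audit step described above hinges on one simple check: the electronic records transmitted over the Internet and the voter-verified paper records shipped back afterward must produce identical tallies. A minimal sketch, with entirely hypothetical ballot data, of what that comparison amounts to:

```python
# Illustrative post-election audit: compare the kiosk's electronic
# ballot records against the voter-verified paper trail.
# The ballot data and candidate names below are hypothetical.
from collections import Counter

def tally(records):
    """Count votes per candidate from a list of ballot records."""
    return Counter(records)

def audit_passes(electronic, paper):
    """The audit succeeds only if the two independently kept records
    yield identical per-candidate tallies."""
    return tally(electronic) == tally(paper)

electronic = ["Alice", "Bob", "Alice", "Alice", "Bob"]
paper      = ["Alice", "Bob", "Alice", "Alice", "Bob"]

assert audit_passes(electronic, paper)           # records agree
assert not audit_passes(electronic, paper[:-1])  # a missing paper ballot is caught
```

The security argument rests on the two records being kept independently: tampering with the electronic ballots in transit, or with the tabulation server, produces a mismatch the paper audit will expose.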

The other approach is to use ordinary absentee ballots, but to distribute them and allow voters to return them online. A voter goes to a web site and downloads a file containing an absentee ballot and a cover sheet. After printing out the file, the voter fills out the cover sheet (giving his name and other information) and the ballot. He scans the cover sheet and ballot, and uploads the scan to a web site. Election officials collect and print the resulting file, and treat the printout like an ordinary absentee ballot.

Kevin Poulsen and Eric Rescorla criticize the security of this system, and for good reason. Internet distribution of blank ballots can be secure enough, if done very carefully, but returning filled-out ballots from an ordinary computer and browser is risky. Eric summarizes the risks:

We have integrity issues here as well: as Poulsen suggests (and quotes Rubin as suggesting), there are a number of ways for things to go wrong here: an attacker could subvert your computer and have it modify the ballots before sending them; you could get phished and the phisher could modify your ballot appropriately before passing it on to the central site. Finally, the attacker could subvert the central server and modify the ballots before they are printed out.

Despite the risks, systems of this sort are moving forward in various places. Arizona has one, which Amy and Craig demonstrated for the panel’s audience, and the Overseas Vote Foundation has one as well.

Why is this less-secure alternative getting more traction than kiosk-based systems? Partly it’s due to the convenience of being able to vote from anywhere (with a Net connection) instead of having to visit a kiosk location. That’s understandable. But another part of the reason seems to be that people don’t realize what can go wrong, and how often things actually do go wrong, in online interactions.

In the end, there was a lot of agreement among the panelists — a rare occurrence in public e-voting discussions — but disagreement remained about how far we can go safely. For overseas voters at least, the gap between what is convenient and what can be made safe is smaller than it is elsewhere, but that gap does still exist.