Archives for 2009

China's New Mandatory Censorware Creates Big Security Flaws

Today Scott Wolchok, Randy Yao, and Alex Halderman at the University of Michigan released a report analyzing Green Dam, the censorware program that the Chinese government just ordered installed on all new computers in China. The researchers found that Green Dam creates very serious security vulnerabilities on users’ computers.

The report starts with a summary of its findings:

The Chinese government has mandated that all PCs sold in the country must soon include a censorship program called Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material. We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process. We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

The researchers have released a demonstration attack which will crash the browser of any Green Dam user. Another attack, for which they have not released a demonstration, allows any web page to seize control of any Green Dam user’s computer.

This is a serious blow to the Chinese government’s mandatory censorware plan. Green Dam’s insecurity is a show-stopper — no responsible PC maker will want to preinstall such dangerous software. The software can be fixed, but it will take a while to test the fix, and there is no guarantee that the next version won’t have other flaws, especially in light of the blatant errors in the current version.

On China's new, mandatory censorship software

The New York Times reports that China will start requiring censorship software on PCs. One interesting quote stands out:

Zhang Chenming, general manager of Jinhui Computer System Engineering, a company that helped create Green Dam, said worries that the software could be used to censor a broad range of content or monitor Internet use were overblown. He insisted that the software, which neutralizes programs designed to override China’s so-called Great Firewall, could simply be deleted or temporarily turned off by the user. “A parent can still use this computer to go to porn,” he said.

In this post, I’d like to consider the different capabilities that software like this could give to the Chinese authorities, without getting too much into their motives.

Firstly, and most obviously, this software allows the authorities to filter web sites and network services that originate inside or outside of the Great Firewall. By operating directly on a client machine, this filter can be aware of the operations of Tor, VPNs, and other firewall-evading software, allowing connections to a given target machine to be blocked, regardless of how the client tries to get there. (You can’t accomplish “surgical” Tor and VPN filtering if you’re only operating inside the network. You need to be on the end host to see where the connection is ultimately going.)
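The distinction between end-host and in-network filtering can be sketched as follows. All of the names and addresses here are invented for illustration; Green Dam's actual implementation is not public.

```python
# Sketch: why destination-aware blocking is only possible on the end host.
# All hostnames and addresses here are hypothetical.

BLOCKED_HOSTS = {"banned-site.example.cn", "bridge.example.org"}

def client_side_filter(requested_host: str) -> bool:
    """Runs on the user's own machine, before any tunneling happens.

    Because it sees the hostname the application asked for, it can block
    the ultimate destination even if the traffic would later be wrapped
    in Tor or a VPN and look opaque on the wire.
    """
    return requested_host in BLOCKED_HOSTS

def network_side_filter(packet_dest_ip: str, tunnel_entry_ips: set) -> bool:
    """Runs inside the network (e.g., at the Great Firewall).

    Once traffic enters a VPN or Tor, the only address visible on the
    wire is the tunnel entry point; the real destination is encrypted.
    The best the network can do is block the tunnel itself.
    """
    return packet_dest_ip in tunnel_entry_ips

# The client-side filter blocks by true destination:
assert client_side_filter("banned-site.example.cn")
# The network-side filter sees only the tunnel entry, not the destination:
assert not network_side_filter("198.51.100.7", {"203.0.113.5"})
```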

Software like this can do far more, since it can presumably be updated remotely to support any feature desired by the government authorities. This could be the ultimate “Big Brother Inside” feature. Not only can the authorities observe behavior or scan files within one given computer, but every computer now becomes a launching point for investigating other machines reachable over a local area network. If one such machine were connected, for example, to a private home network, behind a security firewall, the government software could still scan every other computer on the same private network, log every packet, and so forth. Would you be willing to give your friends the password to log into your private wireless network, knowing their machine might be running this software?

Perhaps less ominously, software like this could also be used to force users to install security patches, to uninstall zombie/botnet systems, and to perform other sorts of remote systems administration. I can only imagine the difficulty of trying to run the Central Government Bureau of National Systems Administration (would they have a phone number you could call to complain when your computer isn’t working, and could they fix it remotely?), but the technological base is now there.

Of course, anybody who owns their own computer will be able to circumvent this software. If you control your machine, you can control what’s running on it. Maybe you can pretend to be running the software, maybe not. That would turn into a technological arms race which the authorities would ultimately fail to win, though they might succeed in creating enough fear, uncertainty, and doubt to deter would-be circumventors.

This software will also have a notable impact in Internet cafes, schools, and other sorts of “public” computing resources, which are exactly the sorts of places that people might go when they want to hide their identity, and where the authorities could have physical audits to check for compliance.

Big Brother is watching.

Internet Voting: How Far Can We Go Safely?

Yesterday I chaired an interesting panel on Internet Voting at CFP. Participants included Amy Bjelland and Craig Stender (State of Arizona), Susan Dzieduszycka-Suinat (Overseas Vote Foundation), Avi Rubin (Johns Hopkins), and Alec Yasinsac (Univ. of South Alabama). Thanks to David Bruggeman and Cameron Wilson at USACM for setting up the panel.

Nobody advocated a full-on web voting system that would allow voting from any web browser. Instead, the emphasis was on more modest steps, aimed specifically at overseas voters. Overseas voters are a good target population, because there aren’t too many of them — making experimentation less risky — and because vote-by-mail serves them poorly.

Discussion focused on two types of systems: voting kiosks, and Internet transmission of absentee ballots.

A voting kiosk is a computer-based system, running carefully configured software, that is set up in a securable location overseas. Voters come to this location, authenticate themselves, and vote just as they would in a polling place back home. A good kiosk system keeps an electronic record, which is transmitted securely across the Internet to voting officials in the voter’s home jurisdiction. It also keeps a paper record, verifiable by the voter, which is sent back to voting officials after the elections, enabling a post-election audit. A kiosk can use optical-scan technology or it can be a touch-screen machine with a paper trail — essentially it’s a standard voting system with a paper trail, connected to home across the Internet. If the engineering is done right, if the home system that receives the electronic ballots is walled off from the central vote-tabulating system, and if appropriate post-election auditing is done, this system can be secure enough to use. All of the panelists agreed that this type of system is worth trying, at least as a pilot test.
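The "transmit an authenticated electronic record, audit against paper later" idea can be sketched in a few lines. This is a minimal illustration using a shared HMAC key; the key management, ballot format, and field names are invented for illustration, and a real kiosk system would use full public-key infrastructure and encryption, not just authentication.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret, provisioned to the kiosk before the election.
KIOSK_KEY = b"provisioned-before-election"

def make_record(ballot: dict) -> dict:
    """At the kiosk: serialize the ballot and attach an authentication tag,
    so officials can detect any modification in transit."""
    payload = json.dumps(ballot, sort_keys=True).encode()
    tag = hmac.new(KIOSK_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_record(record: dict) -> bool:
    """At election headquarters: recompute the tag and compare in
    constant time before accepting the electronic record."""
    expected = hmac.new(KIOSK_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = make_record({"contest": "Governor", "choice": "Candidate A"})
assert verify_record(record)       # an untampered record is accepted
record["payload"] = record["payload"].replace("Candidate A", "Candidate B")
assert not verify_record(record)   # any in-transit modification is detected
```

The paper record kept at the kiosk plays a complementary role: even if the electronic path fails, the post-election audit compares the paper trail against the tallied electronic records.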

The other approach is to use ordinary absentee ballots, but to distribute them, and let voters return them, online. A voter goes to a web site and downloads a file containing an absentee ballot and a cover sheet. After printing out the file, the voter fills out the cover sheet (giving his name and other information) and the ballot. He scans the cover sheet and ballot, and uploads the scan to a web site. Election officials collect and print the resulting file, and treat the printout like an ordinary absentee ballot.

Kevin Poulsen and Eric Rescorla criticize the security of this system, and for good reason. Internet distribution of blank ballots can be secure enough, if done very carefully, but returning filled-out ballots from an ordinary computer and browser is risky. Eric summarizes the risks:

We have integrity issues here as well: as Poulsen suggests (and quotes Rubin as suggesting), there are a number of ways for things to go wrong here: an attacker could subvert your computer and have it modify the ballots before sending them; you could get phished and the phisher could modify your ballot appropriately before passing it on to the central site. Finally, the attacker could subvert the central server and modify the ballots before they are printed out.

Despite the risks, systems of this sort are moving forward in various places. Arizona has one, which Amy and Craig demonstrated for the panel’s audience, and the Overseas Vote Foundation has one as well.

Why is this less-secure alternative getting more traction than kiosk-based systems? Partly it’s due to the convenience of being able to vote from anywhere (with a Net connection) instead of having to visit a kiosk location. That’s understandable. But another part of the reason seems to be that people don’t realize what can go wrong, and how often things actually do go wrong, in online interactions.

In the end, there was a lot of agreement among the panelists — a rare occurrence in public e-voting discussions — but disagreement remained about how far we can go safely. For overseas voters at least, the gap between what is convenient and what can be made safe is smaller than it is elsewhere, but that gap does still exist.

Photo censorship vs. digital photography

On the 20th anniversary of the Tiananmen Square events (protests? uprising? insurrection? massacre?), the New York Times’ Lens Blog put up a great piece about the four different photographers who photographed the iconic “Tank Man”. Inevitably, half of the story concerns the technical details of being in the right place and having the right equipment configuration to capture the image (no small thing in the middle of a civil insurrection). The other half of the story, though, is about how the film got out of the camera and out to us. The story of Tank Man (NYT article, PBS Frontline piece) is quite amazing, by itself, but I want to focus on the photographers.

Tank Man, photo by Jeff Widener / AP

The most widely seen photo, by Jeff Widener, and all the other good coverage of Tank Man were taken from one particular hotel, and the government security services were well aware of it. Our photographers had to get their images out. But how? Widener had a “long-haired college kid” assistant who smuggled several rolls of film in his underwear. Another photographer, Charlie Cole, wrote this:

After taking the picture of the showdown, I became concerned about the PSB’s surveillance of our activities on the balcony. I was down to three rolls of film, with two cameras. One roll held the tank encounter, while the other had other good pictures of crowd and PLA confrontations and of wounded civilians at a hospital.

I loaded the final unexposed roll into one of the cameras, replacing the tank roll, and reluctantly left the other roll of the wounded in the other camera. I felt that if the PSB searched the room or caught me, they would look even harder if there was no film in the cameras.

I then placed the tank roll in a plastic film can and wrapped it in a plastic bag and attached it to the flush chain in the tank of the toilet. I hid my cameras as best I could in the room. Within an hour, the PSB forced their way in and started searching the room. After about five minutes, they discovered the cameras and ripped the film out of each, seemingly satisfied that they had neutralized the coverage. They then forced me to sign a confession that I had been photographing during martial law and confiscated my passport.

In both of these cases, the film was ultimately smuggled to the local bureau of the Associated Press who then processed, scanned, and transmitted the images. This leads me to wonder how this sort of thing would play out today, when photographers have digital cameras, where the bits are much easier to copy and transmit.

First, a few numbers. A “raw” image file from a modern Nikon D700 takes about 13MB and that already includes the (lossless) compression. Back in the film days, the biggest 35mm rolls could hold 36 images (maybe 38 if you were willing to push it on the edges), which tended to keep photographers’ desire to press the button in check. Today, when giant memory cards cost virtually nothing, it’s trivial for a photojournalist to generate tens of gigabytes of raw data in a day of work. So… how long does it take to transmit that much data? Let’s say a hotel’s Internet connection gives you a snappy 1.5 megabits of upstream bandwidth. That means it takes about 70 seconds to transmit one raw image.

If you fear the police will knock down your door at any moment, you don’t have time to send everything. That means that you, the photographer, have got to crunch your pictures through your laptop in a big hurry. If you’ve got the fastest cards and card reader, you’ll be able to copy the data to your hard drive at maybe three pictures per second. Got a thousand pictures on that memory card and you’re waiting a nerve-wracking six minutes to complete the copy.
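The back-of-the-envelope arithmetic behind those two numbers is easy to check (the bandwidth and copy-rate figures are the assumptions stated above):

```python
# Checking the numbers above: upload time per image, and card-copy time.

raw_image_mb = 13        # one Nikon D700 raw file, losslessly compressed
upstream_mbps = 1.5      # assumed hotel upstream bandwidth

# Megabytes to megabits (x8), divided by megabits per second.
seconds_per_image = raw_image_mb * 8 / upstream_mbps
print(round(seconds_per_image))    # -> 69, i.e. about 70 seconds per image

images_on_card = 1000
copy_rate = 3            # images per second with a fast card and reader
copy_minutes = images_on_card / copy_rate / 60
print(round(copy_minutes, 1))      # -> 5.6, the nerve-wracking "six minutes"
```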

At the point where you’re worried about somebody busting down the door, you’re not in the frame of mind to tweak your exposure, color balance, and so forth. Pretty much all you’re thinking is “which one is the winner”, so you’re blasting through trying to select your favorites and then trying to upload them.

Meanwhile, we need to consider the capabilities of the adversary. The PRC could well have prevented us from seeing Widener and Cole’s photos, simply by locking down the AP’s offices. (Two other photographers smuggled their raw film out of the country for external processing.) In the modern era, in a country like the PRC, they could just as well cut off the Internet altogether. (We already know that the PRC is cranking up the filtering of the Great Firewall to block Flickr, Twitter, and other services around the anniversary of the Tiananmen Square events, so it’s easy to imagine far more draconian policies.) This places our hypothetical digital photographer in much the same problematic space as the film photographers of twenty years ago. Now we need to smuggle the bits out by hand.

Traveling with film is a huge pain. Higher-speed film, and particularly black & white film, is annoyingly sensitive to airport x-ray scanners. It’s similarly sensitive to humidity and temperature. And, most important, you can’t see it or copy it until you process it, which isn’t really an option in a war zone. Instead, you’ve got the one roll with the one photo that you really want to get out. Alfred Hitchcock would call the film a MacGuffin and would spin a glorious tale around it.

Digital changes all that. Now, even if the Internet is down, the ability to copy bits is incredibly helpful to our photographer. An iPod, iPhone, or other such device will commonly have gigabytes of solid state storage within. That’s not enough room for everything, but it’s certainly enough room for the photographer to make copies of all the good stuff. Similarly, with memory cards getting so remarkably small (e.g., a Micro-SD card is 15mm x 11mm x 1mm), it’s easy to imagine smuggling them in a variety of places. Advantage to the photographer? Certainly so, but also very dependent on how much time and preparation was available before the police busted down the door. The CompactFlash cards used by most D-SLRs (43mm x 36mm x 3.3mm) are much harder to hide (e.g., you can’t just shove one into a crack in the floor).
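The trade-off in that last point is easy to quantify. The capacity figures here are assumptions chosen to match the "gigabytes of storage" and "tens of gigabytes per day" estimates above:

```python
# Rough arithmetic on the smuggling trade-off, using the numbers above.

raw_image_mb = 13          # one raw file, as estimated earlier
device_gb = 8              # hypothetical iPod / Micro-SD capacity
day_of_shooting_gb = 30    # "tens of gigabytes" in a day of work

images_that_fit = device_gb * 1024 // raw_image_mb
print(images_that_fit)     # -> 630: plenty of room for the keepers

fraction_of_day = device_gb / day_of_shooting_gb
print(round(fraction_of_day, 2))   # -> 0.27: nowhere near enough for everything
```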

There probably isn’t much point in trying to encrypt or hide the data. If the police are busting down your door, they’ll just take everything they can find and wipe everything before they give it back to you.

iPhone Apps: Apple Picks a Little, Talks a Little

Last week Apple, in an incident destined for the textbooks, rejected an iPhone app called Eucalyptus, which lets you download and read classic public-domain books from Project Gutenberg. The rejection meant that nobody could download or use the app (without jailbreaking their phone). Apple’s rationale? Some of the books, in Apple’s view, were inappropriate.

Apple’s behavior put me in mind of the Pick-a-Little Ladies from the classic musical The Music Man. These women, named for their signature song “Pick a Little, Talk a Little,” condemn Marian the Librarian for having inappropriate books in her library:

Maud: Professor, her kind of woman doesn’t belong on any committee. Of course, I shouldn’t tell you this but she advocates dirty books.

Harold: Dirty books?!

Alma: Chaucer!

Ethel: Rabelais!

Eulalie: Balzac!

This is pretty much the scene we saw last week, with the Eucalyptus app in the role of Marian — providing works by Chaucer, Rabelais, and Balzac — and Apple in the role of the Pick-a-Little Ladies. Visualize Steve Jobs, in his black turtleneck and jeans, transported back to 1912 Iowa and singing along with these frumpy busybodies.

Later in The Music Man, the Pick-a-Little Ladies decide that Marian is all right after all, and they praise her for offering great literature. (“The Professor told us to read those books, and we simply adored them all!”) In the same way, Apple, after the outcry over its muzzling of Eucalyptus, reversed course and un-rejected the app. Now we can all get Chaucer! Rabelais! Balzac! on our iPhones.

But there is one important difference between Apple and the Pick-a-Little Ladies. Apple had the power to veto Eucalyptus, but the Ladies couldn’t stop Marian from offering dirty books. The Ladies were powerless because Old Man Mason had cleverly bequeathed the library building to the town but the books to Marian. In today’s terms, Mason had jailbroken the library.

All of this highlights the downside of Apple’s controlling strategy. It’s one thing to block apps that are fraudulent or malicious, but Apple has gone beyond this to set itself up as the arbiter of good taste in iPhone apps. If you were Apple, would you rather be the Pick-a-Little Ladies, pretending to sit in judgement over the town, or Old Man Mason, letting people make their own choices?