November 24, 2006

Voting, Secrecy, and Phonecams

Yesterday I wrote about the recent erosion of the secret ballot. One cause is the change in voting technology, especially voting by mail. But even if we don’t change our voting technology at all, changes in other technologies are still eroding the secret ballot.

Phonecams are a good example. You probably carry into the voting booth a silent camera, built into a mobile phone, that can transmit photos around the world within seconds. Many phones can shoot movies, making it even easier to document your vote. Here is an example, shot in 2004.

Could such a video be faked? Probably. But if your employer or union boss threatens your job unless you deliver a video of yourself voting “correctly”, will you bet your job that your fake video won’t be detected? I doubt it.

This kind of video recording subverts the purpose of the voting booth. The booth is designed to ensure the secret ballot by protecting voters from being observed while voting. Now a voter can exploit the privacy of the voting booth to create evidence of his vote. It’s not an exact reversal – at least the phonecam attack requires the voter’s participation – but it’s close.

One oft-suggested countermeasure is to let voters revise their votes later, or to vote more than once with only one of the votes counting. This approach sounds promising at first, but it seems to cause other problems.

For example, imagine that you can get as many absentee ballots as you want, but only one of them counts and the others will be ignored. Now if somebody sees you complete and mail in a ballot, they can’t tell whether they saw your real vote. But if this is going to work, there must be no way to tell, just by looking at a ballot, whether it is real. The Board of Elections can’t send you an official letter saying which ballot is the real one – if they did, you could show that letter to a third party. (They could send you multiple letters, but that wouldn’t help – how could you tell which letter was the real one?) They can notify you orally, in person, but that makes it harder to get a ballot and lets the clerk at the Board of Elections quietly disenfranchise you by lying about which ballot is real.
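
To make that requirement concrete, here is a toy sketch in Python of the many-ballots, one-real idea. Everything in it (the class, the serial numbers, the honest Board) is made up for illustration; it is a model of the indistinguishability requirement, not a real voting protocol.

    import secrets

    class BoardOfElections:
        """Toy model: several ballots per voter, one privately chosen to count."""

        def __init__(self):
            # voter -> serial of the one ballot that counts; never published
            self._real_serial = {}

        def issue_ballots(self, voter, n):
            serials = [secrets.token_hex(8) for _ in range(n)]
            # The ballots are physically identical; "real" exists only as an
            # entry in this private table, not as anything printed on a ballot.
            self._real_serial[voter] = secrets.choice(serials)
            return serials

        def count(self, voter, submitted):
            # Only the privately designated ballot counts; the rest are ignored.
            return [s for s in submitted if s == self._real_serial[voter]]

    board = BoardOfElections()
    ballots = board.issue_ballots("alice", 3)
    # A coercer who watches Alice mail any one ballot learns nothing, because
    # nothing on the ballot distinguishes the real one.

The catch is visible in the code: the only record of which ballot counts is the Board’s private table, so a dishonest clerk can lie about it, and any verifiable notice telling the voter which serial is real would double as a receipt she could show to a coercer.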

(I’m not saying this problem is impossible to solve, only that (a) it’s harder than you might expect, and (b) I don’t know a solution.)

Approaches where you can cancel or revise your vote later have similar problems. There can’t be a “this is my final answer” button, because you could record yourself pushing it. But if there is no way to rule out later revisions to your vote, then you have to worry about somebody else coming along later and changing your vote.
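
A similar toy sketch, again hypothetical and not any real system’s design, shows the other half of the problem: if the latest submission is the one that counts, then anyone holding the voter’s credential can revise the vote later.

    class RevisableBallotBox:
        """Toy sketch: each credential maps to its most recent vote."""

        def __init__(self):
            self._latest = {}  # credential -> most recent vote

        def cast(self, credential, vote):
            # Later submissions silently replace earlier ones. There is
            # deliberately no "final answer" step, since pressing one
            # could itself be recorded on camera.
            self._latest[credential] = vote

        def tally(self):
            counts = {}
            for vote in self._latest.values():
                counts[vote] = counts.get(vote, 0) + 1
            return counts

    box = RevisableBallotBox()
    box.cast("alice-cred", "Smith")  # Alice's intended vote
    box.cast("alice-cred", "Jones")  # anyone who obtains the credential can
                                     # quietly override it later
    print(box.tally())               # {'Jones': 1}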

Perhaps the hardest problem in voting system design is how to reconcile the secret ballot with accuracy. Methods that protect secrecy tend to undermine accuracy, and vice versa. Clever design is needed to get enough secrecy and enough accuracy at the same time. Technology seems to be making this tradeoff even nastier.

New Congress, Same Old Issues

With control of the House and Senate about to switch parties, everybody is wondering how the new management will affect their pet policy issues. Cameron Wilson has a nice forecast for tech policy issues such as competitiveness, globalization, privacy, DRM, and e-voting.

Most of these don’t break down as partisan issues – differences are larger within each party than between the two parties. So the shift in control won’t necessarily lead to any big change. But there are two factors that may shake things up.

The first factor is the acceleration of change that happens in any organization when new leadership comes in. The new boss wants to show that he differs from the old boss, especially if the old boss was fired. And the new boss gets a short grace period in which to be bold. If a policy or practice was stale and needed to change but was frozen in place by institutional inertia, new management may shake it loose.

The second factor has to do with the individuals who will run the various committees. If you’re not a government geek, you may not realize how much the agenda on particular issues is set by House and Senate committees, and particularly by the committee chairs. For example, any e-voting legislation must pass through the House Administration Committee, so the chair of that committee can effectively block such legislation. As long as Bob Ney was chair of the committee, e-voting reform was stymied – that’s why the Holt e-voting bill could have more than half of the House members as co-sponsors without even reaching a vote. But Mr. Ney’s Abramoff problem and the change in party control will put Juanita Millender-McDonald in charge of the committee. Suddenly Ms. Millender-McDonald’s opinion on e-voting has gotten much more important.

The bottom line is that on most tech issues we don’t know what will happen. On some issues, such as the broad telecom/media/Internet reform discussion, the situation is at least as cloudy as before. Let the battles begin.

HP Spokesman Says Company Regrets Spying on Him

As most people know by now, Hewlett-Packard was recently caught spying on its directors and employees, and some reporters, using methods that are probably illegal and certainly unethical. Throughout the scandal, we’ve heard a lot from HP spokesman Mike Moeller. This got my attention because Mike was my next-door neighbor in Palo Alto during my sabbatical five years ago. Mike and I spent more than a few evening and weekend hours chatting over the fence.

Now it is reported that one of the targets of HP’s spying was … Mike Moeller. An HP internal email turned over to investigators says, “New monitoring system that captures AOL Instant Messaging is now up and running and deployed on Moeller’s computer”. The company also reportedly had a detective follow Mike at a trade show, and they acquired his private phone records.

I wouldn’t have figured Mike as the type to leak boardroom secrets to the press, and indeed the spies found he had done nothing improper.

What’s interesting is that he is still serving as spokesman for HP. I’m not sure what to make of this. He must have been unhappy about being targeted; who wouldn’t be? But the essence of the spokesman’s job is to stay on message – the company’s message, not your own. Resigning in anger is not the spokesmanlike thing to do, and can’t be a good career move.

Heads have rolled at HP over the spying incident – as they should have – but the investigation is far from over. Executives claim not to have known what was going on, and not to have known it might be illegal, but those claims are hard to believe. Why would the company’s lawyers have allowed this to happen without getting careful legal opinions in advance? The most plausible reason is that they didn’t want to find out whether the spying tactics were legal, just as the executives probably didn’t want to find out how the information they received had been collected.

Obviously HP is not the only organization that did this. The investigators HP hired had plenty of other customers, and they are only part of a larger industry of private spies. Obtaining others’ phone records by identity theft is common enough to have its own euphemism: “pretexting”.

After the fallout at HP, expect more revelations about spying by other organizations. People will be more alert for spying, and they’ll know that revealing it can bring down the mighty. Meanwhile, law enforcement will be prying open the records of the “investigators,” finding more examples of reputable organizations that wanted information but didn’t want to be told where it came from.

Eventually the scandal at HP will blow over, and Mike Moeller’s job will return to normal. But maybe he’ll think twice before sending that next email or instant message to his family.

Silver Bullet Podcast

Today we’re getting hep with the youngsters, and offering a podcast in place of the regular blog entry. Technically speaking, it’s somebody else’s podcast – Gary McGraw’s Silver Bullet – but it is a twenty-minute interview with me, much of it discussing blog-related issues. Excerpts will appear in an upcoming issue of IEEE Security & Privacy. Enjoy.

Great, Now They’ll Never Give Us Data

Today’s New York Times has an interesting article by Katie Hafner on AOL’s now-infamous release of customers’ search data.

AOL’s goal in releasing the data was to help researchers by giving them realistic data to study. Today’s technologies, such as search engines, have generated huge volumes of information about what people want online and why. But most of this data is locked up in the data centers of companies like AOL, Google, and eBay, where researchers can’t use it. So researchers have been making do with a few old datasets. The lack of good data is certainly holding back progress in this important area. AOL wanted to help out by giving researchers a better dataset to work with.

Somebody at AOL apparently thought they had “anonymized” the data by replacing the usernames with meaningless numbers. That was a terrible misjudgment – if there is one thing we have learned from the AOL data, it is that people reveal a lot about themselves in their search queries. Reporters have identified at least two of the affected AOL users by name, and finding and publishing embarrassing search sequences has become a popular sport.
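
For concreteness, here is the kind of transformation that was apparently applied, sketched in Python with invented usernames and queries. This illustrates the idea, not AOL’s actual process.

    import itertools

    # Made-up log entries standing in for real search data.
    logs = [
        ("jdoe88", "pizza delivery 123 elm st springfield"),
        ("jdoe88", "jane doe springfield"),  # a vanity search
        ("jdoe88", "numb fingers"),
    ]

    pseudonyms = {}
    counter = itertools.count(1)

    def pseudonymize(user):
        # Replace each username with a meaningless number, as in the release.
        if user not in pseudonyms:
            pseudonyms[user] = next(counter)
        return pseudonyms[user]

    released = [(pseudonymize(user), query) for user, query in logs]
    print(released)
    # [(1, 'pizza delivery 123 elm st springfield'),
    #  (1, 'jane doe springfield'), (1, 'numb fingers')]
    # The username is gone, but all of one person's queries still share
    # pseudonym 1, so the queries themselves (a name, a street address)
    # identify her just as surely as the username did.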

The article quotes some prominent researchers, including Jon Kleinberg, saying they’ll refuse to work with this data on ethical grounds. I don’t quite buy that there is an ethical duty to avoid research uses of the data. If I had a valid research use for it, I’m pretty sure I could develop my own guidelines for using it without exacerbating the privacy problem. If I had had something to do with inducing the ill-fated release of the data, I might have an obligation to avoid profiting from my participation in the release. But if the data is out there due to no fault of mine, and the abuses that occur are no fault of mine, why shouldn’t I be able to use the data responsibly, for the public good?

Researchers know that this incident will make companies even more reluctant to release data, even after anonymizing it. If you’re a search-behavior expert, this AOL data may be the last useful data you see for a long time – which is all the more reason to use it.

Most of all, the AOL search data incident reminds us of the complexity of identity and anonymity online. It should have been obvious that removing usernames wasn’t enough to anonymize the data. But this is actually a common kind of mistake – simplistic distinctions between “personally identifiable information” and other information pervade the policy discussion about privacy. The same error is common in debates about big government data mining programs – it’s not as easy as you might think to enable data analysis without also compromising privacy.
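
Here is a toy illustration of that fallacy, again with made-up patterns and queries: a scrubber that strips everything the rules label “personally identifiable” while leaving the re-identifying material untouched.

    import re

    # Hypothetical scrub rules, for illustration only.
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped strings
        re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # phone-shaped strings
    ]

    def scrub(query):
        for pattern in PII_PATTERNS:
            query = pattern.sub("[REDACTED]", query)
        return query

    queries = [
        "call 555-867-5309 about the apartment",        # caught by the rules
        "divorce lawyer 08540 school pickup schedule",  # untouched: not "PII"
                                                        # by the rules, yet a ZIP
                                                        # code plus routine details
                                                        # can narrow a pseudonym
                                                        # to a handful of people
    ]
    print([scrub(q) for q in queries])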

In principle, it might have been possible to transform the search data further to make it safe for release. In practice we’re nowhere near understanding how to usefully depersonalize this kind of data. That’s an important research problem in itself, which needs its own datasets to work on. If only somebody had released a huge mass of poorly depersonalized data …