November 24, 2024

Why So Little Attention to Botnets?

Our collective battle against botnets is going badly, according to Ryan Naraine’s recent article in eWeek.

What’s that? You didn’t know we were battling botnets? You’re not alone. Though botnets are a major cause of Internet insecurity problems, few netizens know what they are or how they work.

In this context, a “bot” is a malicious software agent that gets installed on an unsuspecting user’s computer. Bots get onto computers by exploiting security flaws. Once there, they set up camp and wait unobtrusively for instructions. Bots work in groups, called “botnets”, in which many thousands of bots (hundreds of thousands, sometimes) all over the Net work together at the instruction of a remote bad guy.

Botnets can send spam or carry out coordinated security attacks on targets elsewhere on the Net. Attacks launched by botnets are very hard to stop because they come from so many places all at once, and tracking down the sources just leads to innocent users with infected computers. There is an active marketplace in which botnets are sold and leased.

Estimates vary, but a reasonable guess is that between one and five percent of the computers on the Net are infected with bots. Some computers have more than one bot, although bots nowadays often try to kill each other.

Bots exploit the classic economic externality of network security. A well-designed bot on your computer tries to stay out of your way, only attacking other people. An infection on your computer causes harm to others but not to you, so you have little incentive to prevent the harm.

Nowadays, bots often fight over territory, killing other bots that have infected the same machine, or beefing up the machine’s defenses against new bot infections. For example, Brian Krebs reports that some bots install legitimate antivirus programs to defend their turf.

If bots fight each other, a rationally selfish computer owner might want his computer to be infected by bots that direct their attacks outward. Such bots would help to defend the computer against other bots that might harm the computer owner, e.g. by spying on him. They’d be the online equivalent of the pilot fish that swim into sharks’ mouths with impunity, to clean the sharks’ teeth.

Botnets live today on millions of ordinary users’ computers, leading to nasty attacks. Some experts think we’re losing the war against botnets. Yet there isn’t much public discussion of the problem among nonexperts. Why not?

iPods Shipped with Worm Infection

Apple revealed yesterday that some new iPods – about 1% of the new iPod Videos shipped in the last month or so – were infected with a computer worm that will spread to Windows PCs, according to Brian Krebs at the Washington Post. Apparently a PC used to test the iPods got infected, and the worm spread to the iPods that were connected to that PC for testing.

As far as the worm is concerned, the iPod is just another storage device, like a thumb drive. The worm spreads by jumping from an infected PC to any removable storage device inserted into the PC, and then using the Windows autorun mechanism to jump from the storage device into any PC the storage device is inserted into.
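Worms of this family typically drive the autorun mechanism with a small text file placed at the root of the removable device. A minimal sketch of such a file (all filenames invented for illustration):

```ini
[autorun]
; Tells Windows to launch this program when the device is inserted
open=evil.exe
; Cosmetic entries that make the device look legitimate in Explorer
icon=music.ico
action=Open folder to view files
```

Because Windows processed this file automatically on insertion, merely plugging in an infected device was enough to run the worm’s payload.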

Apple tried to spread the blame: “As you might imagine, we are upset at Windows for not being more hardy against such viruses, and even more upset with ourselves for not catching it.” The jab at Windows probably refers to the autorun feature, which MacOS lacks, and which is indeed a security risk. (I hear that autorun will be disabled by default in Windows Vista.) Apple also says that the infected machine belonged to a contractor, not to Apple itself. If I were a customer, I would blame Apple – it’s their job to ship a product that won’t hurt me.

As Brian Krebs reminds us, this is at least the third case of portable music players shipping with malware. Last year Creative shipped a few thousand infected players, and McDonald’s Japan recently gave away spyware-infected players.

In all of these cases, the music players were not themselves infected – they didn’t run any malware but only acted as passive carriers. It didn’t matter that the music players are really little computers, because the worm treated them like dumb memory devices. But someday we’ll see a scary virus or worm that actively infects both computers and music players, jumping from one to the other and doing damage on both. Once autorun goes away, this will be the natural approach to writing player-borne malware.

In principle, any device that has updatable software might be subject to malware infections. That includes music players, voting machines, printers, and many other devices. As more devices get “smart”, we’ll see malware popping up in more and more places.

ThreeBallot and Tampering

Let’s continue our discussion (1; 2) of Rivest’s ThreeBallot voting system. I’ve criticized ThreeBallot’s apparent inability to handle write-in votes. More detailed critiques have come from Charlie Strauss (1; 2) and Andrew Appel. Their analysis (especially Charlie’s) is too extensive to repeat here, so I’ll focus on just one of Charlie’s ideas.

Recall that ThreeBallot requires each voter to mark three ballots. Each candidate must be marked on at least one ballot (call this the mandatory mark); and the voter can vote for a candidate by marking that candidate on a second ballot. I call these rules (each candidate gets either one or two marks, and at most one candidate per race gets two marks) the Constraints. Because of the Constraints, we can recover the number of votes cast for a candidate by taking the total number of marks made for that candidate, and subtracting off the number of mandatory marks (which equals the number of voters).

ThreeBallot uses optical scan technology: the voter marks paper ballots that can be read by a machine. A voter’s three ballots are initially attached together. After filling out the ballots, the voter runs them through a checker machine that verifies the Constraints. If the ballots meet the Constraints, the checker puts a red stripe onto the ballots to denote that they have been checked, separates the three ballots, and gives the voter a copy of one ballot (the voter chooses which) to take home. The voter then deposits the three red-striped ballots into a ballot box, where they will eventually be counted by a tabulator machine.

Charlie Strauss points out that there is a window of vulnerability from the time the ballots are checked until they’re put in the ballot box. During this time, the voter might add marks for his desired candidate, or erase marks for undesired candidates, or just put one of the ballots in his pocket and leave. These tactics are tantamount to stuffing the ballot box.

This kind of problem, where a system checks whether some condition is true, and then later relies on that assumption still being true even though things may have changed in the meantime, is a common cause of security problems. Security folks, with our usual tin ear for terminology, call these “time of check to time of use” or TOCTTOU bugs. (Some people even try to pronounce the acronym.) Here, the problem is the (unwarranted) assumption that if the Constraints hold when the ballots are put into the checker, they will still hold when the ballots are later tabulated.
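The same check-then-use shape shows up constantly in ordinary software. A minimal Python sketch of a TOCTTOU race, collapsed into a single process purely for illustration (in a real attack a second process changes the world during the window):

```python
import os
import tempfile

# Create a file that will pass the check (path is a throwaway temp file).
fd, path = tempfile.mkstemp()
os.write(fd, b"checked contents")
os.close(fd)

data = None
if os.access(path, os.R_OK):        # time of check: the file is readable
    os.remove(path)                  # ...the world changes in the window...
    try:
        with open(path) as f:        # time of use: the assumption is now false
            data = f.read()
    except FileNotFoundError:
        data = None

print(data)  # None - the condition verified at check time no longer held
```

ThreeBallot’s checker/tabulator split has exactly this structure: the Constraints are verified at check time, but tabulation relies on them still holding later.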

One way to address this problem is to arrange for the same machine that checks the ballots to also tabulate them, so there is no window of vulnerability. But ThreeBallot’s security depends on the checker being dumb and stateless, so that it can’t remember the ballot-sets of individual voters. The tabulator is more complicated and remembers things, but it only sees the ballots after they are separated and mixed together with other voters’ ballots. Rivest makes clear that the checker and the tabulator must be separate mechanisms.

We might try to fix the problem by having the checker spit out the checked ballots into a sealed, glass-sided chute with a ballot box at the bottom, so the voter never gets to touch the ballots after they are checked. This might seem to eliminate the window of vulnerability.

But there’s still a problem, because the ballots are scanned by two separate scanner devices, one in the checker and one in the tabulator. Inevitably the two scanners will be calibrated differently, so there are some borderline cases that look like a mark to one scanner but not to the other. This means that ballot-triples that meet the Constraints according to one scanner won’t necessarily meet them according to the other scanner. A clever voter can exploit these differences, regardless of which scanner is more permissive, to inflate the influence of his ballots. He can make his mandatory marks for disfavored candidates faint, so that the checker just barely detects them, in the hope that the tabulator will miss them. And he can add a third mark for his favored candidate, just barely too faint for the checker to see, in the hope that the tabulator will count it. If either of these hopes is fulfilled, the final vote count will be wrong.
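A small sketch of the calibration gap, with invented threshold and mark-darkness values (real scanners are more complicated, but the principle is the same):

```python
# A scanner "sees" a mark if its darkness exceeds that scanner's threshold.
CHECKER_THRESHOLD = 0.30    # hypothetical: checker is the more sensitive scanner
TABULATOR_THRESHOLD = 0.40  # hypothetical: tabulator needs a darker mark

def seen(darkness, threshold):
    return darkness > threshold

# The attacker makes his mandatory mark for a disfavored candidate just
# dark enough for the checker to detect:
faint_mark = 0.35
checker_sees = seen(faint_mark, CHECKER_THRESHOLD)      # True: Constraints pass
tabulator_sees = seen(faint_mark, TABULATOR_THRESHOLD)  # False: never counted

# Since votes = marks - voters, a mandatory mark the tabulator misses acts
# as a vote subtracted from that candidate. If the tabulator were instead
# the more sensitive scanner, the attacker would add a too-faint third mark
# for his favored candidate and run the attack in the other direction.
print(checker_sees, tabulator_sees)  # True False
```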

This is just a small sample of Charlie Strauss’s critique of ThreeBallot. If you want to read more, check out Charlie’s reports (1; 2), or Andrew Appel’s.

ThreeBallot and Write-Ins

Yesterday I wrote about Ron Rivest’s ThreeBallot voting system. Today I want to start a discussion of problems with the system. (To reiterate: the purpose of this kind of criticism is not to dump on the designer but to advance our collective understanding of voting system design.) Charlie Strauss and Andrew Appel have more thorough criticisms, which I’ll get to in future posts. Today I want to explain what I think is the simplest problem with ThreeBallot: it has no natural way to handle write-in votes.

(For background on how ThreeBallot works, see the previous post.)

The basic principle of ThreeBallot voting is that each voter fills out three ballots. Every candidate’s name must be marked on either one or two of the three ballots – to vote for a candidate you mark that candidate on exactly two of the three ballots; all other candidates get marked on exactly one of the three ballots. The correctness of ThreeBallot depends on what I’ll call the Constraint: each voter creates at least one mark, and no more than two marks, for each candidate.

But how can we maintain the Constraint for write-in candidates? The no-more-than-two part is easy, but the at-least-one part seems impossible. If some joker writes in Homer Simpson on two of his ballots, does that force me and every other voter to write in Homer on one of our own three ballots? And how could I, voting in the morning, know whether somebody will write in Homer later in the day?
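The difficulty can be made concrete with a small sketch (candidate names and the data representation are illustrative):

```python
def satisfies_constraint(ballots, candidates):
    """Check the Constraint for one voter's set of three ballots.

    Each ballot is a set of marked candidate names. Every candidate must
    be marked on at least one, and at most two, of the three ballots.
    """
    return all(1 <= sum(c in b for b in ballots) <= 2 for c in candidates)

# A valid vote for Washington: two marks for Washington, one for Arnold.
vote = [{"Washington"}, {"Washington", "Arnold"}, set()]
print(satisfies_constraint(vote, ["Washington", "Arnold"]))           # True

# But once "Homer" has been written in somewhere, every other voter's
# ballots violate the at-least-one rule for him: Homer appears zero
# times, and the voter had no way to anticipate the write-in.
print(satisfies_constraint(vote, ["Washington", "Arnold", "Homer"]))  # False
```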

We could give up on the Constraint for write-in candidates. But the desirable features of ThreeBallot – the combination of auditability and secrecy – depend on the Constraint.

In particular, it’s the at-least-one part of the Constraint that allows you to take home a copy of one of your ballots as a receipt. Because you have to mark at least one ballot for every candidate, a receipt showing that you marked one ballot for a particular candidate (a) doesn’t force you to help that candidate, and (b) doesn’t prove anything about how you really voted – and that’s why it’s safe to let you take a receipt. If we throw out the at-least-one rule for write-ins, then a receipt showing a write-in is proof that you really voted for that write-in candidate. And that kind of proof opens the door to coercion and vote-buying.

Alternatively, we can declare that people who cast write-in votes don’t get to take receipts. But then the mere existence of your receipt is proof that you didn’t vote for any write-in candidates. I don’t see any way out of this problem. Do you?

There’s an interesting lesson here about election security, and security in general. Systems that work well in the normal case often get in trouble when they try to handle exceptional or unusual cases. The more complicated the system is, the more likely such problems seem to be.

In the next post I’ll talk about some other instructive problems with ThreeBallot.

ThreeBallot

ThreeBallot is a new voting method from Ron Rivest that is supposed to make elections more secure without compromising voter privacy. It got favorable reviews at first – Michael Shamos even endorsed it at a congressional hearing – but further analysis shows that it has some serious problems. The story of ThreeBallot and its difficulties is a good illustration of why voting security is hard, and also (I hope) an interesting story in its own right.

One reason secure voting is hard is that it must meet seemingly contradictory goals. On the one hand, votes must be counted as cast, meaning the vote totals reported at the end are the correct summation of the ballots actually cast by the voters. The obvious way to guarantee this is to build a careful audit trail connecting each voter to a ballot and each ballot to the final tally. But this is at odds with the secret ballot requirement, which says that there can be no way to connect a particular voter to a particular ballot. Importantly, this last requirement must hold even if the voter wants to reveal his ballot, because letting a voter prove how he voted opens the door to coercion and vote-buying.

If we were willing to abandon the secret ballot, we could help secure elections by giving each voter a receipt to take home, with a unique serial number and the list of votes cast, and publishing all of the ballots (with serial numbers) on the web after the election. A voter could then check his receipt against the online ballot-list, and complain if they didn’t match. But of course the receipt violates the secret-ballot requirement.

Rivest tries to work around this by having the voter fill out three ballots. To vote for a candidate, the voter marks that candidate on exactly two of the three ballots. To withhold a vote from a candidate, the voter marks that candidate on exactly one ballot. All three ballots are put in the ballot box, but the voter gets to take home a copy of one of them (he chooses which one). At the end of election day, all ballots are published, and the voter can compare the ballot-copy he kept against the published list. If anybody modifies a ballot after it is cast, there is a one-third chance that the voter will have a copy of that ballot and will therefore be able to detect the modification. (Or so the theory goes.)
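That one-third figure is easy to sanity-check with a small simulation, assuming the voter keeps a copy of one of her three ballots uniformly at random and the attacker alters one of the three at random:

```python
import random

def detection_probability(trials=100_000, seed=1):
    """Estimate the chance that the ballot the voter kept a copy of is
    the same one the attacker altered (both chosen uniformly from 3)."""
    rng = random.Random(seed)
    detected = sum(
        rng.randrange(3) == rng.randrange(3)  # kept copy == altered ballot
        for _ in range(trials)
    )
    return detected / trials

print(detection_probability())  # prints a value close to 1/3
```

Of course, a real attacker can do better than altering ballots at random, which is part of why the theory breaks down in practice.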

Consider a two-candidate race between George Washington and Benedict Arnold. Alice wants to cast her vote for Washington, so she marks Washington on two ballots and Arnold on one. The key property is that Alice can choose to take home a copy of a Washington ballot or a copy of an Arnold ballot – so the mark on the ballot she takes doesn’t reveal her vote. Arnold’s crooked minions can offer to pay her, or threaten to harm her, if she doesn’t produce an Arnold ballot afterward, and she can satisfy them while still casting her vote for Washington.

ThreeBallot is a clever idea and a genuine contribution to the debate about voting methods. But we shouldn’t rush out and adopt it. Other researchers, including Charlie Strauss and Andrew Appel, have some pretty devastating criticisms of it. They’ll be the topic of my next post.