
Archives for 2010

School's Laptop Spying Software Exploitable from Anywhere

This post is by Jay Novak, Jon Stribley, and J. Alex Halderman.

Absolute Manage is a remote administration program that allows sysadmins to supervise and maintain client computers over the Internet. It has been in the news since early February, when Lower Merion School District in Pennsylvania was alleged to be using it to spy on students at home via their laptop webcams. The story took a new twist last Thursday, when Threat Level reported that researchers at Leviathan Security Group had discovered serious vulnerabilities in the program. These problems let attackers carry out a number of exploits, including installing malware or running other arbitrary code on the students’ laptops. The major limitation in the reported attacks is that the bad guy needs to be on the same local network as the victim, and the program’s developer, Absolute Software, says it’s a largely theoretical threat.

Unfortunately, the security problems are worse than has been reported so far, and are far from theoretical. In fact, any machine with a public IP address running Absolute Manage can be taken over by attackers anywhere on the Internet. Such an attacker can command the machine to run arbitrary code, steal data, or take photographs using the computer’s camera.

We have been investigating Absolute Manage for several months, hoping to gain a better understanding of the security measures it employs to protect users. We are disclosing this information now because, following the Threat Level post, we believe it’s only a matter of time until real attackers discover it. Users need to be aware of the vulnerabilities and take proper measures to protect themselves.

Broken Cryptography

The security issues revolve around the way Absolute Manage encrypts commands sent to the clients. The software has two parts: the Absolute Manage Agent, which runs on client machines, and the Absolute Manage Server, which tracks clients under its supervision and sends commands to perform remote operations like installing new software. The clients and server exchange messages using a TCP-based protocol. Clients continuously listen for messages containing instructions from the server, and they also periodically send “heartbeat” messages to the server to deliver status updates and pull down any queued commands.

The programs compress and encrypt each message before transmitting it. They use an encryption algorithm called Blowfish, which is a credible, if outmoded, choice. The problem is how the keys are managed. Strong encryption protocols like SSL negotiate a new secret key for each communication session, but Absolute Manage uses the same hard-coded key every time, in every client. Blowfish can use variable-sized keys from 1 to 56 bytes, and the programmers decided to use secret textual phrases for the keys. We know what the keys are. We won’t publish them here, but they were easy to figure out by inspecting the program files.

Using hard-coded keys neutralizes the benefits of cryptography. Since the same, easy-to-discover key is used in every client, it’s straightforward for an attacker to unwrap the encrypted messages, modify them, or encrypt messages of his own. This problem is very similar to the broken cryptography in Diebold voting machines that Ari, Alex, and Ed discovered years ago. Diebold also used a simple fixed key, which allowed an attacker with access to one machine to learn the key and attack all the other machines.
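To make the failure concrete, here is a minimal sketch of what “the same key in every client” looks like in code. The key string, the CBC mode, and the use of the pycryptodome library are our own illustrative assumptions, not the real values or necessarily the real cipher configuration used by Absolute Manage.

# Illustrative sketch only (Python, pycryptodome). The key and mode below are
# invented; they are NOT the real Absolute Manage values.
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

HARDCODED_KEY = b"example shared passphrase"   # identical bytes shipped in every client

def encrypt_message(iv: bytes, plaintext: bytes) -> bytes:
    # With the shared key, an attacker can forge command messages of his own.
    cipher = Blowfish.new(HARDCODED_KEY, Blowfish.MODE_CBC, iv)
    return cipher.encrypt(pad(plaintext, Blowfish.block_size))

def decrypt_message(iv: bytes, ciphertext: bytes) -> bytes:
    # The same key also unwraps every message sent to every client in the world.
    cipher = Blowfish.new(HARDCODED_KEY, Blowfish.MODE_CBC, iv)
    return unpad(cipher.decrypt(ciphertext), Blowfish.block_size)

Anyone who extracts the key from one installed copy of the program, as we did by inspecting the program files, can run the equivalent of these two functions against traffic for any other client.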

The broken cryptography in Absolute Manage enables a variety of attacks. For instance, an attacker can eavesdrop on a command sent from the server and rewrite it to issue a new command that installs and executes malicious code. Or, he can act as a man-in-the-middle between the client and server and insert evil commands in response to a client heartbeat message. An attacker could easily carry out these attacks by sniffing wireless LAN traffic in the school and office environments where Absolute Manage is often deployed. These seem to be the attacks referred to in the Threat Level post.

Exploitable from Anywhere

The limitation of these attacks is that the bad guy usually needs to be on the same physical network as the victim. However, we discovered that potentially more dangerous attacks are possible. In these attacks, a bad guy anywhere on the Internet can exploit any Absolute Manage client with a publicly reachable IP address.

Consider an example command message sent from the server to a client, with the encryption and compression removed. The client tries to authenticate the command using a parameter called the SeedValue. This value is provided by the server when a client initially attempts to contact it after booting. After that, the client requires the SeedValue to be the same in subsequent commands. The client basically ignores all of the other parameters that look like they would be hard to guess, so the SeedValue is the only thing that makes it difficult for the attacker to generate his own command messages from whole cloth.

In our example message, the SeedValue is:

D969E2CD0CB67F4063F45CEAC7D145B12D76A969306AE0CE

It turns out that it is encrypted using a second hard-coded textual phrase. Decrypting it yields the following bytes:

00 00 00 00 e0 03 10 03 40 03 00 00 31 00 34 00 37 00 35 00 00 00 00 00 00

The length is misleading; the value is actually just a 16-bit Unicode encoding of a 7-digit number, 1401475. This is the server’s “serial number,” which was provided by Absolute Software along with the product activation key when we purchased our license.
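As a quick sanity check of that claim, here is how a 7-digit serial number looks when stored as 16-bit little-endian Unicode. This is a minimal sketch: it models only the digit characters, not the surrounding header bytes in the decrypted value.

# Round-trip a serial number through UTF-16LE, the 16-bit Unicode encoding
# used for the digit characters inside the decrypted SeedValue.
serial = "1401475"
encoded = serial.encode("utf-16-le")
print(encoded.hex(" "))              # 31 00 34 00 30 00 31 00 34 00 37 00 35 00
print(encoded.decode("utf-16-le"))   # 1401475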

Thus, an attacker who wants to send arbitrary commands to Absolute Manage clients just needs to figure out the server’s serial number. One way he can do that is to guess it. The attacker could try contacting the client with different values until one of them turns out to be correct. If all serial numbers are 7 digits (like ours) or less, and there is no pattern to their assignment, then an attacker can guess among 10 million possibilities. If there is a pattern (the likely case) then the attacker’s job may be much easier. Our tests show that we can make more than 330 guesses/second over a fast network link, so even assuming no pattern an attacker could expect to succeed after about four hours of guessing. Each server uses the same serial number for all its clients, so after the attacker guesses it for one client, he can compromise all the server’s other clients without any additional guesswork.
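Here is the back-of-the-envelope arithmetic behind that estimate, using only the figures quoted above (roughly 330 guesses per second and 10 million equally likely serial numbers):

# Expected time to brute-force a uniformly random 7-digit serial number,
# at the guessing rate we measured over a fast network link.
GUESSES_PER_SECOND = 330
SERIAL_SPACE = 10_000_000                    # all 7-digit serial numbers, assuming no pattern

expected_guesses = SERIAL_SPACE / 2          # on average, success halfway through the space
expected_hours = expected_guesses / GUESSES_PER_SECOND / 3600
print(f"about {expected_hours:.1f} hours")   # about 4.2 hours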

Brute forcing the server’s serial number is one method attackers can use, but there is a much more efficient attack for targeting a large set of clients: the server will tell the correct SeedValue to any client that asks. If the attacker knows the IP address of the server a client is trying to contact, he can just impersonate a freshly-booted client and ask the server to send him the correct SeedValue. The server will respond with all the information the attacker needs to impersonate the server.

A bad guy could extend this method to target all Absolute Manage clients in one attack. He could scan the entire Internet address space to discover all hosts running Absolute Manage Server and build a list of active SeedValues. (Servers generally run on public IP addresses so that they can receive status updates from clients that are away from the local network.) Such a scan would take only a few days. The attacker could then do a second Internet-wide scan to discover Absolute Manage clients. For each of them, he would need only a few seconds to try all the active SeedValues from his list and determine the correct one. This attack could be exploited to quickly install and run malicious code on all computers running the Absolute Manage client on publicly accessible IP addresses.

Defenses and Lessons

In the short term, users can protect themselves by uninstalling the Absolute Manage client. This might be difficult on machines with privileges locked down, so system administrators will need to help.

(Attempts to work around the problem, such as firewalling the server so it can’t be found by an Internet scan, may backfire. If the server is unreachable from outside the firewall, clients that are rebooted away from the local network will be unable to obtain a SeedValue. In this situation, the clients insecurely default to accepting arbitrary commands without even the protection of a SeedValue.)

In the long term, the solution is for Absolute Manage to adopt serious cryptographic authentication. Absolute Software says they will do this in the next version later this summer–let’s hope they get it right next time.

Remote administration products like Absolute Manage carry large risks because they intentionally create a mechanism for a remote third party to take control of the machine. This can be powerful in the right hands but devastating if exploited by attackers. There will always be a risk of abuse by authorized parties, as alleged in the students’ lawsuit against Lower Merion School District, but correctly designed technology should at least prevent unauthorized third-party attacks by making sure only authorized parties can issue commands. This requires getting authentication right–exactly what Absolute Manage failed to do.

Because of these dangers, remote administration software should be designed defensively, minimizing the risk even if the authentication fails. For example, it could only allow installation of signed binaries, or it could give users prominent notification before actions are taken so that attacks can be more easily detected.

The blatant vulnerabilities in Absolute Manage suggest that this kind of remote administration software requires greater security scrutiny. We will further discuss the problems and the lessons they carry in a forthcoming paper.

Developing Texts Like We Develop Software

Recently I was asked to speak at a conference for university librarians, about how the future of academic publication looks to me as a computer scientist. It’s an interesting question. What do computer scientists have to teach humanists about how to write? Surely not our elegant prose style.

There is something distinctive about how computer scientists write: we tend to use software development tools to “develop” our texts. This seems natural to us. A software program, after all, is just a big text, and the software developers are the authors of the text. If a tool is good for developing the large, complex, finicky text that is a program, why not use it for more traditional texts as well?

Like software developers, computer scientist writers tend to use version control systems. These are software tools that track and manage different versions of a text. What makes them valuable is not just the ability to “roll back” to old versions — you can get that (albeit awkwardly) by keeping multiple copies of a file. The big win with version control tools is the level of control they give you. Who wrote this line? What did Joe write last Tuesday? Notify me every time section 4 changes. Undo the changes Fred made last Wednesday, but leave all subsequent changes in place. And so on. Version control systems are a much more powerful relative of the “track changes” and “review” features of standard word processors.

Another big advantage of advanced version control is that it enables parallel development, a style of operation in which multiple people can work on the text, separately, at the same time. Of course, it’s easy to work in parallel. What’s hard is to merge the parallel changes into a coherent final product — which is a huge pain in the neck with traditional editing tools, but is easy and natural with a good version control system. Parallel development lets you turn out a high-quality product faster — it’s a necessity when you have hundreds or thousands of programmers working on the same product — and it vastly reduces the amount of human effort spent on coordination. You still need coordination, of course, but you can focus it where it matters, on the conceptual clarity of the document, without getting distracted by version-wrangling.

Interestingly, version control and parallel development turn out to be useful even for single-author works. Version control lets you undo your mistakes and reconstruct the history of a problematic section. Parallel development is useful if you want to try an experiment — what happens if I swap sections 3 and 4? — and pursue the new approach for a while, while retaining the ability to accept or reject the experiment as a whole. These tools are so useful that experienced computer scientists tend to use them to write almost anything longer than a blog post.

While version control and parallel development have become standard in computer science writing, there are other software development practices that are only starting to cross the line into CS writing: issue tracking and the “release early and often” strategy.

Issue tracking systems are used to keep track of problems, bugs, and other issues that need to be addressed in a text. As with version control, you can do this manually, or rely on a simple to-do list, but specialized tools are more powerful and give you better control and better visibility into the past. As with software, issues can range from small problems (our terminology for X is confusing) to larger challenges (it would be nice if our dataset were bigger).

“Release early and often” is a strategy for rapidly improving a text by making it available to users (or readers), getting feedback, and rapidly turning out a new version that addresses the feedback. Users’ critiques become issues in the issue tracking system; authors modify the text to address the most urgent issues; and a new version is released as soon as the text stabilizes. The result is rapid improvement, aligned with the true desires of users. This approach requires the right attitude from users, who need to be willing to tolerate problems, in exchange for a promise that their critiques will be addressed promptly.

What does all of this mean for writers who are not computer scientists? I won’t be so bold as to say that the future of writing will be exactly like software development. But I do think that the tools and techniques of software development, which are already widely used by computer scientist writers, will diffuse into more common usage. It will be hard to retrofit them into today’s large, well-established editing software, but as writing tools move into the cloud, I wouldn’t be surprised to see them take on more of the attributes of today’s software development tools.

One consequence of using these tools is that you end up with a fairly complete record of how the text developed over time, and why. Imagine having a record like that for the great works of the past. We could know what the writer did every hour, every day while writing. We could know which issues and problems the author perceived in earlier versions of the text, and how these were addressed. We could know which issues the author saw as still unfixed in the final published text. This kind of visibility will be available into our future writing — assuming we produce works that are worthy of study.

India's Electronic Voting Machines Have Security Problems

A team led by Hari Prasad, Alex Halderman, and Rop Gonggrijp released today a technical paper detailing serious security problems with the electronic voting machines (EVMs) used in India.

The independent Electoral Commission of India, which is generally well respected, has dealt poorly with previous questions about EVM security. The chair of the Electoral Commission has called the machines “infallible” and “perfect” and has rejected any suggestion that security improvements are even possible. I hope the new study will cause the EC to take a more realistic approach to EVM security.

The researchers got their hands on a real Indian EVM which they were able to examine and analyze. They were unable to extract the software running in the machine (because that would have required rendering the machine unusable for elections, which they had agreed not to do), so their analysis focused on the hardware. They were able to identify several attacks that manipulated the hardware, either by replacing components or by clamping something onto a chip on the motherboard to modify votes. They implemented demonstration attacks, actually building proof-of-concept substitute hardware and vote-manipulation devices.

Perhaps the most interesting aspect of India’s EVMs is how simple they are. Simplicity is a virtue in security as in engineering generally, and researchers (including me) who have studied US voting machines have advocated simplifying their design. India’s EVMs show that while simplicity is good, it’s not enough. Unless there is some way to audit or verify the votes, even a simple system is subject to manipulation.

If you’re interested in the details, please read the team’s paper.

The ball is now in the Election Commission’s court. Let’s hope that they take steps to address the EVM problems, to give the citizens of the world’s largest democracy the transparent and accurate elections they deserve.

The Gizmodo Warrant: Searching Journalists in the Terabyte Age

Last Friday night, police officers in California used a warrant to search the home of Jason Chen, the Gizmodo blogger who wrote about the iPhone prototype found in a Redwood City bar. Orin Kerr has written an interesting post assessing the legality of the search. I wanted to touch on an important issue he didn’t discuss: Whether the search the police are conducting is unconstitutionally overbroad.

Orin discusses two laws that specifically shield journalists from being the target of a search, the California Reporter’s Shield Law, found jointly at California Penal Code 1524(g) and California Evidence Code 1070, and the federal Privacy Protection Act (PPA), 42 U.S.C. 2000aa. Both laws were written to limit the impact of Zurcher v. Stanford Daily, a U.S. Supreme Court case authorizing the use of a warrant to search a newspaper’s offices. The Supreme Court decided Zurcher in 1978, and Congress enacted the PPA in 1980 (and amended it in unrelated ways in 1996). I’m not sure when the California law was enacted, but I bet it’s of similar vintage. In other words, all of the rules that govern police searches of news offices were created in the age of typewriters, desks, filing cabinets, and stacks of paper.

Now, flash forward thirty years. The police who searched Jason Chen’s home seized the following: a MacBook, an HP server, two Dell desktop computers, an iPad, a ThinkPad, two MacBook Pros, an Iomega NAS, three external hard drives, and three flash drives. They also seized other storage-containing devices, including two digital cameras and two smart phones. If Jason Chen’s computing habits are anything like mine, the police likely seized many terabytes of disk space, storing hundreds of thousands (millions?) of files, containing information stretching back years. And they took all of this information to investigate an alleged crime (the sale of the iPhone prototype) that could not have happened more than 37 days before the search (the iPhone was found on March 18th), which they learned about from a blog post published four days before the search.

I’m deeply concerned about overbreadth as the police begin to search through these terabytes of information. The police now possess, intermingled with the evidence of the alleged crime they are investigating, hundreds of thousands of documents belonging to a journalist/blogger that are utterly irrelevant to their investigation. Jason Chen has been blogging for Gizmodo since 2006, and he’s probably written hundreds of stories. The police likely have thousands of email messages revealing confidential sources, detailing meetings, and trading comments with editors, and thousands of other documents bearing notes from interviews, drafts of articles, and other sensitive information. Because of Chen’s beat, some of these documents probably reveal secrets of great economic and business value in Silicon Valley. Under traditional, outmoded Fourth Amendment rules, the police can read every single document they possess, so long as they intend only to look for evidence of the crime, and under the “plain view rule,” they can use any evidence they find of other, unrelated crimes in court against Chen or anyone else.

If the California state courts share my concerns about overbreadth, they should consider embracing the very sensible rules for search warrants for computer hard drives (in any case, not just those involving journalists) adopted last year by the Ninth Circuit in United States v. Comprehensive Drug Testing. To paraphrase, in cases involving the search and seizure of computers, the Ninth Circuit requires five things: (1) the government must waive the plain view rule, meaning they must agree not to use evidence of crimes other than the one under investigation that led to the warrant; (2) the government must wall off the forensic experts who search the hard drive from the agents investigating the case; (3) the government must explain the “actual risks of destruction of information” they would face if they weren’t allowed to seize entire computers; (4) the government must use a search protocol to designate what information they can give to the investigating agents; and (5) the government must destroy or return non-responsive data.

These rules are especially needed when the target of a police search is a journalist (in fact, they may not go far enough). And these rules may be required under Zurcher. In justifying the search of the newspaper’s offices in Zurcher, the Supreme Court agreed that when the Fourth Amendment’s search and seizure rules collide with First Amendment values, like freedom of the press, the “Fourth Amendment must be applied with ‘scrupulous exactitude.'” The court went on to explain why ordinary search warrants for news offices (remember, back in the age of paper files) meet this heightened standard:

There is no reason to believe, for example, that magistrates cannot guard against searches of the type, scope, and intrusiveness that would actually interfere with the timely publication of a newspaper. Nor, if the requirements of specificity and reasonableness are properly applied, policed, and observed, will there be any occasion or opportunity for officers to rummage at large in newspaper files or to intrude into or to deter normal editorial and publication decisions.

When the California state courts combine this thirty-year-old statement of the law with the modern realities of terabyte storage devices, they should hold that the Fourth Amendment requires magistrate judges to play an integral and active role in the administration of the search of Jason Chen’s computers and other storage devices. At the very least, the courts should forbid the police from looking at any file timestamped before March 18, 2010, and in addition, they should force the police to comply with the Comprehensive Drug Testing rules. In the terabyte age, these rules are necessary at a minimum to prevent the police from interfering with a free press.

Needle-in-a-Haystack Problems, and P vs. NP

Last week I wrote about needle-in-a-haystack problems, in which it’s hard to find the solution but if somebody tells you the solution it’s easy to verify. A commenter asked whether such problems are related to the P vs. NP problem, which is the most important unsolved problem in theoretical computer science. It turns out that they are related, and that needle-in-a-haystack problems are a nice framework for explaining the P vs. NP problem, which few non-experts seem to understand.

Stated simply, the P vs. NP problem asks whether needle-in-a-haystack computational problems can exist. To understand what this means, we need to take a short detour into computer science theoryland. It’s perfectly safe, but stay close to the group so you don’t wander off into a finite field…. Okay, let’s meet P and NP.

P is the set of problems that can be solved efficiently. The precise theoretical definition of “efficient” is a bit subtle: we define the “size” of a problem to be the number of characters needed to write down the problem; and we say a solution method is efficient if, when the problem is large, the method can finish in a length of time that is less than the problem size raised to some power.
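Stated slightly more formally (this is just the paragraph above rewritten in symbols, nothing beyond it): a solution method is efficient if its running time T(n) on problems of size n satisfies

T(n) ≤ c · n^k

for some fixed constants c and k, once n is sufficiently large.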

For example, suppose we are given two N-digit numbers, A and B, and a 2N-digit number C, and we’re asked whether A times B equals C. It requires 4N characters to write down the problem, one character for each digit in each of the numbers. We can solve the problem by multiplying A and B, using the multiplication method we learned in school, and then comparing the result to C. The multiplication will take us roughly N-squared steps, and the comparison is faster than that. So the solution time will grow roughly as N-squared, and if N is large this is less than the third power of the problem size (i.e., less than 4N raised to the third power). Therefore multiplication is in P, which matches our intuition that we know how to multiply efficiently.
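Here is a minimal sketch of that verification procedure, with the numbers given as decimal strings so the digit-by-digit cost is visible. The code is illustrative only; any reasonable multiplication routine would do.

# Check whether A * B == C using schoolbook multiplication on decimal strings.
# For N-digit inputs this takes roughly N*N digit operations, i.e. polynomial time.
def schoolbook_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

def is_product(a: str, b: str, c: str) -> bool:
    return schoolbook_multiply(a, b) == (c.lstrip("0") or "0")

print(is_product("12", "34", "0408"))   # True: 12 * 34 = 408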

Here’s another example, the “traveling salesperson problem” (TSP): given a list of cities, a table showing the airfare between each pair of cities, and a total travel budget, is there an itinerary that visits every city on the list and returns to where it started, without exceeding the total travel budget? One way to solve this problem is to try out all of the possible itineraries, and see if there’s one that is cheap enough. That works, but it is not efficient, because its running time grows faster than any fixed power of the problem size. No efficient algorithm for the TSP is known. There might be one, but if there is one we haven’t discovered it yet. So we don’t know if the TSP is in P.
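A minimal sketch of that brute-force approach (the city names and fares below are made up for illustration):

# Try every possible itinerary; with n cities there are (n-1)! of them, so the
# running time grows faster than any fixed power of the input size.
from itertools import permutations

def tsp_within_budget(cities, fare, budget):
    start, rest = cities[0], cities[1:]
    for order in permutations(rest):
        route = (start,) + order + (start,)
        cost = sum(fare[a][b] for a, b in zip(route, route[1:]))
        if cost <= budget:
            return True
    return False

fare = {"A": {"B": 100, "C": 250},
        "B": {"A": 100, "C": 120},
        "C": {"A": 250, "B": 120}}
print(tsp_within_budget(["A", "B", "C"], fare, 500))   # True: A-B-C-A costs 470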

You might have guessed by now that P stands for “polynomial”, as in “solvable in polynomial time”. That is indeed the origin of the name P. And you might go on to guess that NP stands for “not polynomial” — but that would be wrong. If you must know, NP stands for “nondeterministic polynomial”. I won’t bother explaining what “nondeterministic” means here, because that has confused generations of students. Instead, let’s use an alternative, but equivalent, definition of NP.

NP is the set of problems having yes/no answers, for which, whenever the correct answer is “yes”, there is some “hint” that allows us to verify the “yes” answer efficiently. Think about the TSP: if somebody tells you a cheap itinerary, you can verify efficiently that that itinerary visits all of the cities, ends where it started, and costs less than the travel budget. So the TSP is in NP.
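Here is the corresponding efficient verifier as a minimal sketch: each check below runs in time polynomial in the size of the problem, so a correct “yes” answer can be confirmed quickly once the hint (the itinerary) is in hand. The fare table is again invented for illustration.

# Verify a hinted TSP itinerary: it must visit every city, return to its
# starting point, and stay within the travel budget. All three checks are cheap.
def verify_tsp_hint(cities, fare, budget, itinerary):
    if itinerary[0] != itinerary[-1]:
        return False                                   # must end where it started
    if set(itinerary) != set(cities):
        return False                                   # must visit every city on the list
    cost = sum(fare[a][b] for a, b in zip(itinerary, itinerary[1:]))
    return cost <= budget

fare = {"A": {"B": 100, "C": 250},
        "B": {"A": 100, "C": 120},
        "C": {"A": 250, "B": 120}}
print(verify_tsp_hint(["A", "B", "C"], fare, 500, ["A", "B", "C", "A"]))   # True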

Multiplication is in NP as well. Given any hint (“booga booga”, say) we can verify that A times B equals C, by simply ignoring the hint, and then multiplying and comparing as above. By a similar argument, any problem in P must also be in NP.

You can see now how this connects with needles and haystacks. If a problem is in NP but not in P, it’s like a needle-in-a-haystack problem: we can verify the answer efficiently if we’re given a hint, but without a hint we can’t find the answer efficiently. So asking whether needle-in-a-haystack problems exist is exactly the same as asking whether P is different from NP.

Are P and NP different? We don’t know. Consider the Traveling Salesperson Problem. We know it’s in NP, because we can verify the answer efficiently given a hint, but we don’t know if it’s in P. We don’t know if the TSP can be solved efficiently — we haven’t found an efficient solution yet but we can’t rule out the possibility that somebody will discover one tomorrow. A lot of people have tried really hard for a long time to find an efficient solution, so most experts tend to assume that no efficient solution is possible — but the experts have been wrong before.

If, on the other hand, P and NP turn out to be equal, the consequences would be huge. For example, much of cryptography would collapse. Decrypting a message is supposed to be a needle-in-a-haystack problem—easy if you have a hint (i.e., the secret decryption key) but hard otherwise.

It’s annoying, to say the least, that such a basic question about information remains unanswered. For decades theorists have been trying to scale the P vs. NP mountain from various directions, without success. The rest of us have gotten accustomed to assuming that needle-in-a-haystack problems exist, and hoping for the best.