November 22, 2004

Used Hard Disks Packed with Confidential Information

Simson Garfinkel has an eye-opening piece in CSO magazine about what's left on used hard drives. He bought a pile of secondhand drives and systematically examined them to see what could be recovered.

I took the drives home and started my own forensic analysis. Several of the drives had source code from high-tech companies. One drive had a confidential memorandum describing a biotech project; another had internal spreadsheets belonging to an international shipping company.

Since then, I have repeatedly indulged my habit for procuring and then analyzing secondhand hard drives. I bought recycled drives in Bellevue, Wash., that had internal Microsoft e-mail (somebody who was working from home, apparently). Drives that I found at an MIT swap meet had financial information on them from a Boston-area investment firm.

One of the drives once lived in an ATM. It contained a year’s worth of financial transactions.
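
Why is so much recoverable? Deleting a file, or even reformatting a drive, usually removes only the filesystem's bookkeeping; the data blocks themselves survive until something happens to overwrite them. Here is a minimal sketch of the kind of scan involved, in the spirit of the Unix strings utility; it is my illustration, not Garfinkel's actual toolchain:

    # Scan a raw disk image for runs of printable text. Deleting a file
    # usually removes only the filesystem's pointer to it, so a raw scan
    # of the underlying blocks still turns up "deleted" contents.
    # (Illustrative sketch only -- not Garfinkel's actual tools.)
    import re
    import sys

    MIN_LEN = 8  # ignore shorter runs; they are mostly binary noise
    PRINTABLE = re.compile(rb"[ -~]{%d,}" % MIN_LEN)

    def scan_image(path, chunk_size=1 << 20):
        """Yield printable-ASCII runs found anywhere in a raw disk image."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                # (Runs that straddle a chunk boundary may be split;
                # good enough for a demonstration.)
                for match in PRINTABLE.finditer(chunk):
                    yield match.group().decode("ascii")

    if __name__ == "__main__":
        # e.g., an image made with: dd if=/dev/sdb of=drive.img
        for s in scan_image(sys.argv[1]):
            print(s)

Even this crude approach will surface e-mail, spreadsheets, and memos from a drive that was merely "erased"; real forensic tools go further and reconstruct whole files.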

Security Attacks on Security Software

A new computer worm infects PCs by attacking security software, according to a Brian Krebs story in Saturday’s Washington Post. The worm exploits flaws in two personal firewall products, BlackICE and RealSecure. Just to be clear: the firewalls’ flaw is not that they fail to stop the worm, but that they actively create a hole that the worm exploits. People who didn’t buy these firewalls are safe from the worm.

This has to be really embarrassing for the vendor, ISS. The last thing a security product should do is to create more vulnerabilities.

This problem is not unique. Just last week, a vulnerability was reported in another security product, Norton Internet Security.

Consumers are still better off, on balance, using PC security products; on the whole, these products close more holes than they open. But this is a useful reminder that all network software carries risk. Careful software engineering is needed everywhere, and especially in security products.

ATM Crashes to Windows Desktop

Yesterday, an ATM in Baker Hall at Carnegie Mellon University crashed, or hit some kind of software error, and ended up displaying the Windows XP desktop. Some students started Windows Media Player on it, playing a song that comes preinstalled on Windows XP machines, and took photos and movies of the whole thing.

There’s no way to tell whether the students, starting from the Windows desktop, would have been able to make the machine dispense its stock of cash. As my colleague Andrew Appel observes, it’s possible to design an ATM so that it cannot dispense cash without the knowledge and participation of a computer back at the bank. For example, the cash-dispensing hardware could require a cryptographic authorization from the bank’s computer before doing anything. It’s also possible to design a Windows-based ATM that never (or almost never) displays the Windows desktop, failing over instead to a “technical difficulties – please call customer service” screen; the designers apparently didn’t adopt that precaution.
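
To make Appel's suggestion concrete, here is a minimal sketch of authenticated cash dispensing, assuming an HMAC scheme with a key shared between the bank's computer and the dispenser hardware. The message format, field names, and nonce handling are my inventions for illustration, not any real ATM protocol:

    # Sketch: the dispenser hardware acts only on commands authenticated
    # by the bank's computer. A compromised Windows PC sitting between
    # them cannot forge a valid command, because it never holds the key.
    # (Message format and nonce scheme are assumptions, not a real protocol.)
    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(32)  # provisioned into the dispenser at install time

    def bank_authorize(key, atm_id, amount_cents, nonce):
        """Bank side: sign a one-time dispense authorization."""
        msg = b"%s|%d|%s" % (atm_id, amount_cents, nonce)
        return msg, hmac.new(key, msg, hashlib.sha256).digest()

    def dispenser_check(key, msg, tag, seen_nonces):
        """Dispenser side: act only if the MAC verifies and the nonce is fresh."""
        _atm_id, _amount, nonce = msg.split(b"|")
        if nonce in seen_nonces:
            return False  # replayed authorization
        if not hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
            return False  # forged or corrupted command
        seen_nonces.add(nonce)
        return True  # hardware may now dispense

    seen = set()
    nonce = os.urandom(8).hex().encode()
    msg, tag = bank_authorize(SHARED_KEY, b"atm-017", 20000, nonce)
    assert dispenser_check(SHARED_KEY, msg, tag, seen)      # authorized once
    assert not dispenser_check(SHARED_KEY, msg, tag, seen)  # replay rejected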

A single, isolated failure like this isn’t, in itself, a big deal. Every ATM transaction is recorded and audited. Banks have the power to adopt loss-prevention technology; they have good historical data on error rates and losses; and they absorb the cost of both losses and loss-prevention technology. So it seems safe to assume that they are managing these kinds of risks rationally.

An Inexhaustible Supply of Bugs

Eric Rescorla recently released an interesting paper analyzing data on the discovery of security bugs in popular products. I have some minor quibbles with the paper’s main argument (and I may write more about that later), but the data analysis alone makes the paper worth reading. Briefly, what Eric did was to take data about reported security vulnerabilities and fit it to a standard model of software reliability. This allowed him to estimate the number of security bugs in popular software products and the rate at which those bugs will be found in the future.

When a product version is shipped, it contains some number of security bugs. Over time, some of these bugs are found and fixed. One hopes that this depletes the supply of bugs, so that it gets harder (for both the good guys and the bad guys) to find new ones.
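
A standard reliability model makes this concrete. In the Goel-Okumoto model, for instance, cumulative discoveries follow N(t) = a(1 - e^(-bt)), where a estimates the total pool of bugs in the shipped version and b the rate at which the pool is depleted. Here is a sketch of that kind of fit; the model choice and the toy data are my assumptions, not necessarily Eric's exact method:

    # Fit a Goel-Okumoto reliability model, N(t) = a * (1 - exp(-b*t)),
    # to cumulative vulnerability counts. "a" estimates the total bug
    # pool; "b" is the depletion rate. Toy data for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    t = np.arange(1, 25)  # months since release
    n = np.array([8, 15, 21, 27, 33, 38, 43, 47, 51, 55, 58, 62,
                  65, 67, 70, 72, 74, 76, 78, 80, 81, 83, 84, 85])

    (a, b), _ = curve_fit(goel_okumoto, t, n, p0=(100.0, 0.05))
    print(f"estimated total bugs a = {a:.0f}, depletion rate b = {b:.3f}/month")
    # A fitted b near zero means the discovery curve is essentially a
    # straight line: bugs are being found, but the pool isn't shrinking
    # noticeably -- which is what Eric's real-world data shows.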

The first conclusion from Eric’s analysis is that there are many, many security bugs. This confirms the expectations of many security experts. My own rule of thumb is that typical release-quality industrial code has about one serious security bug per 3,000 lines of code. A product with tens of millions of lines of code will naturally have thousands of security bugs (thirty million lines at one bug per 3,000 lines works out to about 10,000).

The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.

If true, this conclusion has profound implications for how we think about software security. It implies that once a version of a software product is shipped, there is nothing anybody can do to improve its security. Sure, we can (and should) apply software patches, but patching is just a treadmill and not a road to better security. No matter how many bugs we fix, the bad guys will find it just as easy to uncover new ones.
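
A back-of-the-envelope simulation shows how this can happen; all the numbers here are my own illustrative choices, not Eric's data. When bugs are spread through far more code than searchers can examine, years of finding and fixing barely dent the discovery rate:

    # Toy simulation of the patching treadmill. Searchers probe random
    # "locations" (think: lines of code); probing a location that holds
    # an unfound bug yields a discovery, which is then fixed.
    # All parameters are illustrative assumptions.
    import random

    def yearly_find_rate(locations, initial_bugs, probes_per_year, years, seed=0):
        rng = random.Random(seed)
        bugs = set(rng.sample(range(locations), initial_bugs))
        rates = []
        for _ in range(years):
            found = 0
            for _ in range(probes_per_year):
                loc = rng.randrange(locations)
                if loc in bugs:
                    bugs.remove(loc)  # found and fixed
                    found += 1
            rates.append(found)
        return rates

    # Same search effort in both cases; only the codebase and pool differ.
    print("small codebase:", yearly_find_rate(100_000, 50, 30_000, 8))
    print("large codebase:", yearly_find_rate(10_000_000, 5_000, 30_000, 8))
    # The small pool is visibly depleted within a few years; the large
    # one yields a roughly constant find rate -- an "inexhaustible" supply.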

Utah Anti-Spyware Bill

The Utah state legislature has passed an anti-spyware bill, which now awaits the governor’s signature or veto. The bill is opposed by a large coalition of infotech companies, including Amazon, AOL, AT&T, eBay, Microsoft, Verizon, and Yahoo.

The bill bans the installation of spyware on a user’s computer. The core of the bill is its definition of “spyware”, which includes both ordinary spyware (which captures information about the user and/or his browsing habits, and sends that information back to the spyware distributor) and adware (which displays uninvited popup ads on a user’s computer, based on what the user is doing). Leaving aside the adware parts of the definition, we’re left with this:

(4) Except as provided in Subsection (5), “spyware” means software residing on a computer that:

(a) monitors the computer’s usage;
(b) (i) sends information about the computer’s usage to a remote computer or server; or [adware stuff omitted]; and
(c) does not:

(i) obtain the consent of the user, at the time of, or after installation of the software but before the software does any of the actions described in Subsection (4)(b):

(A) to a license agreement:

(I) presented in full; and
(II) written in plain language;

(B) to a notice of the collection of each specific type of information to be transmitted as a result of the software installation; [adware stuff omitted]
and

(ii) provide a method:

(A) by which a user may quickly and easily disable and remove the software from the user’s computer;
(B) that does not have other effects on the non-affiliated parts of the user’s computer; and
(C) that uses obvious, standard, usual, and ordinary methods for removal of computer software.

(5) Notwithstanding Subsection (4), “spyware” does not include:

(a) software designed and installed solely to diagnose or resolve technical difficulties;
(b) software or data that solely report to an Internet website information previously stored by the Internet website on the user’s computer, including:

(i) cookies;
(ii) HTML code; or
(iii) Java Scripts; or

(c) an operating system.

Since the bill bans all spyware, this amounts to a requirement that programs meeting the criteria of 4(a) and 4(b) (except those exempted by (5)) must avoid 4(c) by obtaining user consent and providing a suitable removal facility.
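
One way to see the structure of the definition is to transcribe its conjunctions and disjunctions directly into code. The following is my paraphrase of the statutory logic, with the adware prongs omitted and the exemptions simplified; a sketch, not legal advice:

    # The bill's "spyware" definition as a predicate. Field names are my
    # paraphrase of the statute; adware prongs omitted. Not legal advice.
    from dataclasses import dataclass

    @dataclass
    class Program:
        monitors_usage: bool                 # 4(a)
        sends_usage_to_remote: bool          # 4(b)(i)
        consent_to_plain_full_license: bool  # 4(c)(i)(A)
        notice_of_each_data_type: bool       # 4(c)(i)(B)
        easy_removal: bool                   # 4(c)(ii)(A)
        removal_has_no_side_effects: bool    # 4(c)(ii)(B)
        removal_uses_standard_methods: bool  # 4(c)(ii)(C)
        is_diagnostic_tool: bool             # 5(a)
        only_reports_site_stored_data: bool  # 5(b): cookies, HTML, scripts
        is_operating_system: bool            # 5(c)

    def is_spyware(p: Program) -> bool:
        # Subsection (5) exemptions trump everything else.
        if (p.is_diagnostic_tool or p.only_reports_site_stored_data
                or p.is_operating_system):
            return False
        consent_ok = p.consent_to_plain_full_license and p.notice_of_each_data_type
        removal_ok = (p.easy_removal and p.removal_has_no_side_effects
                      and p.removal_uses_standard_methods)
        # Spyware = monitors AND transmits AND fails the consent+removal test.
        return (p.monitors_usage and p.sends_usage_to_remote
                and not (consent_ok and removal_ok))

Transcribing it this way also makes the overbreadth question concrete: plug in a hypothetical auto-update agent or crash reporter and see whether it trips the predicate.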

The bill’s opponents claim the definition is overbroad and would cover many legitimate software services. If they’re right, it seems to me that the notice-and-consent requirement would be more of a burden than the removability requirement, since nearly all legitimate software is removable, either by itself or as part of a larger package in which it is embedded.

I have not seen specific examples of legitimate software that would be affected. A letter being circulated by opponents refers generically to “a host of important and beneficial Internet communication software” that gathers and communicates data that “may include information necessary to provide upgrade computer security, to protect against hacker attacks, to provide interactivity on web sites, to provide software patches, to improve Internet browser performance, or enhance search capabilities”. Can anybody think of a specific example in which it would be burdensome to obtain the required consent or to provide the required removal facility?

[Opponents also argue that the bill’s adware language is overbroad. That in itself may be enough to oppose the bill; but I won’t discuss that aspect of the bill here.]