May 18, 2024

Content Filtering and Security

Buggy security software can make you less secure. Indeed, a growing number of intruders are exploiting bugs in security software to gain access to systems. Smart system administrators have known for a long time to be careful about deploying new “security” products.

A company called Audible Magic is trying to sell “content filtering” systems to universities and companies. The company’s CopySense product is a computer that sits at the boundary between an organization’s internal network and the Internet. CopySense watches the network traffic going by, and tries to detect P2P transfers that involve infringing content, in order to log them or block them. It’s not clear how accurate the system’s classifiers are, as Audible Magic does not allow independent evaluation. The company claims that CopySense improves security, by blocking dangerous P2P traffic.

It seems just as likely that CopySense makes enterprise networks less secure. CopySense boxes run general-purpose operating systems, so they are prone to security bugs that could allow an outsider to seize control of them. And a compromised CopySense system would be very bad news, an ideal listening post for the intruder, positioned to watch all incoming and outgoing network traffic.

How vulnerable is CopySense? We have no way of knowing, since Audible Magic doesn’t allow independent evaluation of the product. You have to sign an NDA to get access to a CopySense box.

This in itself should be cause for suspicion. Hard experience shows that companies that are secretive about the design of their security technology tend to have weaker systems than companies that are more open. If I were an enterprise network administrator, I wouldn’t trust a secret design like CopySense.

Audible Magic could remedy this problem and show confidence in their design by lifting their restrictive NDA requirements, allowing independent evaluation of their product and open discussion of its level of security. They could do this tomorrow. Until they do, their product should be considered risky.

U.S. Considering Wireless Passport Protection

The U.S. government is “taking a very serious look” at improving privacy protection for the new wireless-readable passports, according to an official quoted in a great article by Kim Zetter at Wired News. Many people, including me, have worried about the privacy implications of having passports that are readable at a distance.

The previously proposed system would transmit all of the information stored on the inside cover of the passport – name, date and place of birth, (digitized) photo, etc. – to any device that is close enough to beam a signal to the passport and receive the passport’s return signal.

The improved system, which is called “Basic Access Control” in the specification, would use a cryptographic protocol between the passport and a reader device. The protocol would require the reader device to prove that it knew the contents of the machine-readable text on the inside cover of the passport (the bottom two lines of textish stuff on a U.S. passport), before the passport would release any information. The released information would also be encrypted so that an eavesdropper could not capture it.
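To make the idea concrete, here is a heavily simplified sketch of that style of protocol. This is not the actual ICAO Basic Access Control protocol – the key derivation, the challenge-response mechanics, and the sample machine-readable line are all illustrative stand-ins – but it shows how a reader can prove knowledge of the printed text before the passport releases anything:

```python
import hashlib
import hmac
import os

def derive_key(mrz_line: bytes) -> bytes:
    # Both the passport chip and the reader derive a shared key from the
    # machine-readable text printed on the inside cover.
    return hashlib.sha256(mrz_line).digest()

def passport_challenge() -> bytes:
    # The passport issues a fresh random challenge to the reader.
    return os.urandom(16)

def reader_response(key: bytes, challenge: bytes) -> bytes:
    # The reader proves it knows the key (and hence has optically read
    # the printed text) by MACing the challenge with it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def passport_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Hypothetical machine-readable line, for illustration only.
mrz = b"P<USADOE<<JANE<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<"
key = derive_key(mrz)

# A reader that has seen the inside cover passes the check...
c = passport_challenge()
assert passport_verify(key, c, reader_response(key, c))

# ...while a reader that merely beamed a signal at the passport,
# without knowing the printed text, fails it.
wrong_key = derive_key(b"guess")
assert not passport_verify(key, c, reader_response(wrong_key, c))
```

The real protocol also derives session keys so that the released information is encrypted in transit, which is what defeats the eavesdropper mentioned above.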

I have not done a detailed security analysis of the crypto protocols, so I can’t vouch for their security. Juels, Molnar, and Wagner point out some protocol flaws (in the Basic Access Control protocol) that are probably not a big deal in practice. I’ll assume here that the protocols are secure enough.

The point of these protocols is to release the digital information only to an entity that can prove it already has had access to information on the inside of the passport. Since the information stored digitally is already visible (in analog form, at least) to somebody who has that access, the privacy risk is vastly reduced, and it becomes impossible for a stranger to read your passport without your knowledge.

You might ask what is the point of storing the information digitally when it can be read digitally only by somebody who has access to the same information in analog form. There are two answers. First, the digital form can be harder to forge, because the digital information can be digitally signed by the issuing government. Assuming the digital signature scheme is secure, this makes it impossible to modify the information in a passport or to replace the photo, steps which apparently aren’t too difficult with paper-only passports. (It’s still possible to copy a passport despite the digital signature, but that seems like a lesser problem than passport modification.) Second, the digital form is more susceptible to electronic record-keeping and lookup in databases, which serves various governmental purposes, either legitimate or (for some governments) nefarious.
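The anti-forgery property is easy to illustrate. The sketch below uses an HMAC under a secret issuer key as a stand-in for a real public-key signature (a real passport scheme would use asymmetric signatures, so that readers can verify without holding any secret); the record contents are made up:

```python
import hashlib
import hmac

# Stand-in for the issuing government's signing key (a real scheme
# would use a public-key signature, not a shared-secret MAC).
ISSUER_KEY = b"issuing-government-secret"

def sign(record: bytes) -> bytes:
    return hmac.new(ISSUER_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(record), sig)

record = b"DOE, JANE | born 1970-01-01 | photo-digest 0xab12"
sig = sign(record)

assert verify(record, sig)            # the unmodified record checks out
tampered = record.replace(b"JANE", b"EVIL")
assert not verify(tampered, sig)      # any modification is detected
```

Note that copying the (record, signature) pair verbatim still verifies – signatures stop modification, not duplication, which is exactly the residual copying problem described above.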

The cryptographic protocols now being considered were part of the digital-passport standard already, as an optional feature that each country could choose to adopt or not. The U.S. had previously chosen not to adopt it, but is now thinking about reversing that decision. It’s good to see the government taking the passport privacy issue seriously.

Berkeley to victims of personal data theft: "Our bad"

Last week I and 98,000 other lucky individuals received the following letter:

University of California, Berkeley
Graduate Division
Berkeley, California 94720-5900

Dear John Alexander Halderman:

I am writing to advise you that a computer in the Graduate Division at UC Berkeley was stolen by an as-yet unidentified individual on March 11, 2005. The computer contained data files with names and Social Security numbers of some individuals, including you, who applied to be or who were graduate students, or were otherwise affiliated with the University of California.

At this time we have no evidence that personal data were actually retrieved or misused by any unauthorized person. However, because we take very seriously our obligation to safeguard personal information entrusted to us, we are bringing this situation to your attention along with the following helpful information.

You may want to take the precaution of placing a fraud alert on your credit file. This lets creditors know to contact you before opening new accounts in your name. This is a free service which you can use by calling one of the credit bureau telephone numbers:

Equifax 1-800-525-6285     Experian 1-888-397-3742     Trans Union 1-800-680-7289

To alert individuals that we may not have reached directly, we have issued a press release describing the theft. We encourage you to check for more details on our Web site at http://newscenter.berkeley.edu/security/grad. The following Web sites and telephone numbers also offer useful information on identity theft and consumer fraud.

California Department of Consumer Affairs, Office of Privacy Protection:
http://www.privacy.ca.gov/cover/identitytheft.htm

Federal Trade Commission’s Website on identity theft: http://www.consumer.gov/idtheft/

Social Security Administration fraud line: 1-800-269-0271

Unfortunately, disreputable persons may contact you, falsely identifying themselves as affiliated with UC Berkeley and offering to help. Please be aware that UC Berkeley will only contact you if you ask us, by email or telephone, for information. We recommend that you do not release personal information in response to any contacts of this nature that you have not initiated.

UC Berkeley deeply regrets this possible breach of confidentiality. Please be assured that we have taken immediate steps to further safeguard the personal information maintained by us. If you have any questions about this matter, please feel free to contact us at or toll free at 1-800-372-5110.

Sincerely,
Jeffrey A. Reimer
Associate Dean

In a few days I’ll post more about my experience with the “fraud alert” procedure.

UPDATE 11:45pm – I should add that I gave Berkeley my ‘personal data’ when I applied to their computer science PhD program in 2003. (I ended up at Princeton.) Why, two years later, are they still holding on to this information?

Why Use Remotely-Readable Passports?

Yesterday at CFP, I saw an interesting panel on the proposed radio-enabled passports. Frank Moss, a State Department employee and accomplished career diplomat, is the U.S. government’s point man on this issue. He had the guts to show up at CFP and face a mostly hostile audience. He clearly believes that he and the government made the right decision, but I’m not convinced.

The new passports, if adopted, will contain a chip that stores everything on the passport’s information page: name, date and place of birth, and digitized photo. This information will be readable by a radio protocol. Many people worry that bad guys will detect and read passports surreptitiously, as people walk down the street.

Mr. Moss said repeatedly that the chip can only be read at a distance of 10 centimeters (four inches, for the metric-impaired), making surreptitious reading unlikely. Later in the panel, Barry Steinhardt of the ACLU did a live demo in which he read information off the proposed radio-chip at a distance of about one meter, using a reader device about the size of a (closed) laptop. I have no doubt that this distance could be increased by engineering the reader more aggressively.

There was lots of back-and-forth about partial safeguards that might be added, such as building some kind of foil or wires into the passport cover so that the chip could only be read when the passport was open. Such steps do reduce the vulnerability of using remotely-readable passports, but they don’t reduce it to zero.

In the Q&A session, I asked Mr. Moss directly why the decision was made to use a remotely readable chip rather than one that can only be read by physical contact. Technically, this decision is nearly indefensible, unless one wants to be able to read passports without notifying their owners – which, officially at least, is not a goal of the U.S. government’s program. Mr. Moss gave a pretty weak answer, which amounted to an assertion that it would have been too difficult to agree on a standard for contact-based reading of passports. This wasn’t very convincing, since the smart-card standard could be applied to passports nearly as-is – the only change necessary would be to specify exactly where on the passport the smart-card contacts would be. The standardization and security problems associated with contactless cards seem to be much more serious.

After the panel, I discussed this issue with Kenn Cukier of The Economist, who has followed the development of this technology for a while and has a good perspective on how we reached the current state. It seems that the decision to use contactless technology was made without fully understanding its consequences, relying on technical assurances from people who had products to sell. Now that the problems with that decision have become obvious, it’s late in the process and would be expensive and embarrassing to back out. In short, this looks like another flawed technology procurement program.

Network Monitoring: Harder Than It Looks

Proposals like the Cal-INDUCE bill often assume that it’s reasonably easy to monitor network traffic to block certain kinds of data from being transmitted. In fact, there are many simple countermeasures that users can (and do, if pressed) use to avoid monitoring.

As a simple example, here’s an interesting (and well known) technical trick. Suppose Alice has a message M that she wants to send to Bob. We’ll treat M as a number (bearing in mind that any digital message can be thought of as a number). Alice chooses a random number R which has the same number of digits as M. She sends the message R to Bob; then she computes X = M-R, and sends the message X to Bob. Obviously, Bob can add the two messages, R + (M-R), and the sum will be M – the message Alice originally wanted to send him.
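In code, the two-message trick looks like this (a sketch; the prime modulus and the sample message are arbitrary choices, and the arithmetic is mod P as noted below):

```python
import secrets

P = 2**127 - 1  # a large prime; all arithmetic is done modulo P

def split(m: int) -> tuple[int, int]:
    # Alice picks R uniformly at random, then sends R and X = M - R (mod P).
    r = secrets.randbelow(P)
    x = (m - r) % P
    return r, x

def combine(r: int, x: int) -> int:
    # Bob recovers M = R + (M - R) (mod P).
    return (r + x) % P

m = 123456789
r, x = split(m)
assert combine(r, x) == m
# Either message by itself is uniformly distributed, independent of m.
```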

[Details, for mathematical purists: all arithmetic is done modulo a large prime P; R is chosen randomly in [0, P-1]. When I say a value “looks random” I mean that it is indistinguishable (in the information-theoretic sense) from a random value.]

Now here’s the cool part: both of the messages that Alice sends look completely random. Obviously R looks random, because Alice generated it randomly. But it turns out that X looks random too. To be more precise: either message by itself looks completely random; only by combining the two messages can any information be extracted.

By this expedient, Alice can foil any network monitor who looks at network messages one at a time. Each individual message looks innocuous, and it is only by storing messages and combining them that a monitor can learn what Alice is really telling Bob. If Alice sends the two messages by different paths, then the monitor has to gather messages from multiple paths, and combine them, to learn what Alice is telling Bob.

It’s easy for Alice to extend this trick, to split her message M into any number of pieces. For example, Alice could split M into five pieces, by generating four random numbers, R1, R2, R3, and R4, and then computing X = M-(R1+R2+R3+R4). Given any four of these five pieces, nothing can be deduced. Only somebody who has all five pieces, and knows to combine them by addition, can extract information. So a monitor has to gather and compare many messages to see what Alice is up to, even though Alice isn’t using encryption.
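The generalization to any number of pieces is just as short (same sketch conventions as before – arithmetic mod a large prime P):

```python
import secrets

P = 2**127 - 1  # a large prime modulus

def split_n(m: int, n: int) -> list[int]:
    # n-1 uniformly random shares, plus one final share chosen so that
    # all n shares sum to M (mod P).
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((m - sum(shares)) % P)
    return shares

def combine_n(shares: list[int]) -> int:
    return sum(shares) % P

m = 424242
shares = split_n(m, 5)
assert combine_n(shares) == m
# Any four of the five shares are just independent random numbers;
# only the full set, summed mod P, yields M.
```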

There are many more technical tricks like this that are easy for Alice and Bob to adopt, but hard for network monitors to cope with. If the monitors want to engage in an arms race, they’ll lose.