Archives for March 2005

Pharming

Internet spoofing attacks have been getting more and more sophisticated. The latest evil trick is “Pharming,” which relies on DNS poisoning (explanation below) to trick users about which site they are viewing. Today I’ll explain what pharming is. I’ll talk about fixes later in the week.

Spoofing attacks, in general, try to get a user to think he is viewing one site (say, Citibank’s home banking site) when he is really viewing a bogus site created by a villain. The villain makes his site look just like Citibank’s site, so that the user will trust the site and enter information, such as his Citibank account number and password, into it. The villain then exploits this information to do harm.

Today most spoofing attacks use “phishing.” The villain sends the victim an email, which is forged to look like it came from the target site. (Forging email is very easy – the source and content of email messages are not verified at all.) The forged email may claim to be a customer service message asking the victim to do something on the legitimate site. The email typically contains a hyperlink purporting to go to the legitimate site but really going to the villain’s fake site. If the victim clicks the hyperlink, he sees the fake site.

The best defense against phishing is to distrust email messages, especially ones that ask you to enter sensitive information into a website, and to distrust hyperlinks in email messages. Another defense is to have your browser tell you the name of the site you are really visiting. (The browser’s Address line tries to do this, so in theory you could just look there, but various technical tricks may make this harder than you think.) Tools like SpoofStick display “You’re on freedom-to-tinker.com” in big letters at the top of your browser window, so that you’re not fooled about which site you’re viewing. The key idea in these defenses is that your browser knows which domain (e.g. “citibank.com” or “freedom-to-tinker.com”) the displayed page is coming from.
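
To make the idea concrete, here is a rough sketch of the parsing step behind a tool like SpoofStick: pull the hostname out of the URL the browser is actually visiting and show it to the user. This is just an illustration in Python using the standard urllib.parse module; SpoofStick itself is a browser toolbar, and the attacker’s URL below is invented.

from urllib.parse import urlparse

def displayed_site(url):
    """Return the hostname the browser is really talking to."""
    host = urlparse(url).hostname or ""
    return "You're on " + host

# A deceptive link whose visible text might say “citibank.com” but whose
# actual URL points somewhere else entirely:
print(displayed_site("http://citibank.example.attacker.net/login"))
# -> You're on citibank.example.attacker.net  (not citibank.com)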

“Pharming” tries to fool your computer about where the data is coming from. It does this by attacking DNS (the Domain Name System), the service that translates names like “freedom-to-tinker.com” into the numeric addresses that computers actually use.

The Internet uses two types of addresses to designate machines. IP addresses are numbers like 128.112.68.1. Every data packet that travels across the Internet is labeled with source and destination IP addresses, which are used to route the packet from the packet’s source to its destination.

DNS addresses are text-strings like www.citibank.com. The Internet’s routing infrastructure doesn’t know anything about DNS addresses. Instead, a DNS address must be translated into an IP address before data can be routed to it. Your browser translated the DNS address “www.freedom-to-tinker.com” into the IP address “216.157.129.231” in the process of fetching this page. To do this, your browser probably consulted one or more servers out on the Internet, to get information about proper translations.
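
Here is a small illustration of that translation step, using Python’s standard socket module. The address you get back is whatever your configured DNS resolver reports at the moment you run it.

import socket

name = "www.freedom-to-tinker.com"
ip = socket.gethostbyname(name)   # ask the DNS resolver for the IP address
print(name, "resolves to", ip)

# Every packet your browser then sends to the site is addressed to this
# numeric IP, not to the text string "www.freedom-to-tinker.com".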

“Pharming” attacks the translation process, to trick your computer somehow into accepting a false translation. If your computer accepts a false translation for “citibank.com,” then when you communicate with “citibank.com” your packets will go to the villain’s IP address, and not to the IP address of Citibank. I’ll omit the details of how a villain might do this, as this post is already pretty long. But here’s the scary part: if a pharming attack is successful, there is no information on your computer to indicate that anything is wrong. As far as your computer (and the software on it) is concerned, everything is working fine, and you really are talking to “citibank.com”. Worse yet, the attack can redirect all of your Citibank-bound traffic – email, online banking, and so on – to the villain’s computer.
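
To see why the victim’s software has no way to notice, consider this toy simulation. The “resolver” is just a Python dictionary standing in for DNS, and the IP addresses are reserved documentation addresses rather than real ones; the point is only that client software uses whatever answer the resolver hands it, with no way to tell a poisoned answer from a genuine one.

LEGITIMATE = {"citibank.com": "192.0.2.10"}      # what DNS is supposed to say
POISONED   = {"citibank.com": "198.51.100.66"}   # what a pharmer makes it say

def connect(hostname, resolver):
    ip = resolver[hostname]   # the client trusts this answer completely
    print("Connecting to", hostname, "at", ip, "...")
    # a real client would now open a connection to ip and send the user's
    # account number and password to whoever is listening there

connect("citibank.com", LEGITIMATE)   # goes to the real bank
connect("citibank.com", POISONED)     # looks identical to the user and the software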

What can be done about this problem? That’s a topic for another day.

Harvard Business School Boots 119 Applicants for "Hacking" Into Admissions Site

Harvard Business School (HBS) has rejected 119 applicants who allegedly “hacked” into a third-party site to learn whether HBS had admitted them. An AP story, by Jay Lindsay, has the details.

HBS interacts with applicants via a third-party site called ApplyYourself. Harvard had planned to notify applicants on March 30 whether they had been admitted. Somebody discovered last week that some applicants’ admit/reject letters were already available on the ApplyYourself website. There were no hyperlinks to the letters, but a student who was logged in to the site could access his/her letter by constructing a special URL. Instructions for doing this were posted in an online forum frequented by HBS applicants. (The instructions, which no longer work due to changes in the ApplyYourself site, are reproduced here.) Students who did this saw either a rejection letter or a blank page. (Presumably the blank page meant either that HBS would admit the student, or that the admissions decision hadn’t been made yet.) 119 HBS applicants used the instructions.

Harvard has now summarily rejected all of them, calling their action a breach of ethics. I’m not so sure that the students’ action merits rejection from business school.

My first reaction on reading about this was surprise that HBS would make an admissions decision (as it apparently had done in many cases) and then wait for weeks before informing the applicant. Applicants rejected from HBS would surely benefit from learning that information as quickly as possible. Harvard had apparently gone to the trouble of telling ApplyYourself that some applicants were rejected, but they weren’t going to tell the applicants themselves!? It’s hard to see a legitimate reason for HBS to withhold this information from applicants who want it.

As far as I can tell, the only “harm” that resulted from the students’ actions is that some of them learned the information about their own status that HBS was, for no apparent reason, withholding from them. And the information was on the web already, with no password required (for students who had already logged on to their own accounts on the site).

I might feel differently if I knew that the applicants were aware that they were breaking the rules. But I’m not sure that an applicant, on being told that his letter was already on the web and could be accessed by constructing a particular URL, would necessarily conclude that accessing it was against the rules. And it’s hard to justify punishing somebody who caused no real harm and didn’t know that he was breaking the rules.

As the AP article suggests, this is an easy opportunity for HBS (and MIT and CMU, who did the same thing) to grandstand about business ethics, at low cost (since most of the applicants in question would have been rejected anyway). Stanford, on the other hand, is reacting by asking the applicants who viewed their Stanford letters to come forward and explain themselves. Now that’s a real ethics test.

Cal-Induce Bill Morphs Into Filtering Mandate

A bill in the California state senate (SB 96), previously dubbed the “Cal-Induce Act,” has now morphed via amendment into a requirement that copyright and porn filters be included in many network software programs.

Here’s the heart of the bill:

Any person or entity that [sells, advertises, or distributes] peer-to-peer file sharing software that enables its user to electronically disseminate commercial recordings or audiovisual works via the Internet or any other digital network, and who fails to incorporate available filtering technology into that software to prevent use of that software to commit an unlawful act with respect to a commercial recording or audiovisual work, or a violation of [state obscenity or computer intrusion statutes] is punishable … by a fine not exceeding [$2500], imprisonment … for a period not to exceed one year, or by both …

This section shall not apply to the following:
(A) Computer operating system or Internet browser software.
(B) An electronic mail service or Internet service provider.
(C) Transmissions via a [home network] or [LAN]. [Note: The bill uses an odd definition of “LAN” that would exclude almost all of the real LANs I know. – EF]

As used in this section, “peer to peer file sharing software” means software … the primary purpose of which … is to enable the user to connect his or her computer to a network of other computers on which the users of these computers have made available recordings or audiovisual works for electronic dissemination to other users who are connected to the network. When a transaction is complete, the user has an identical copy of the file on his or her computer and may also then disseminate the file to other users connected to the network.

The main change from the previous version of the bill is the requirement to include filtering technologies; the previous version had required instead that the person “take reasonable care in preventing” bad uses of the software. This part of the bill is odd in several ways.

First, if the system in question uses a client-server architecture (as in the original Napster system), the bill applies only to the client-side software, since only the client software meets the bill’s definition of P2P. Since the bill requires that a filter be incorporated into the P2P software, a provider could not protect itself by doing server-side filtering, even if that filtering were perfectly effective. This bill doesn’t just mandate filtering, it mandates client-side filtering.

Second, the bill apparently requires anyone who advertises or distributes P2P software to incorporate filters into it. This seems a bit odd; advertisers and distributors don’t normally control, or even get to inspect, the design of the products they advertise and distribute.

Third, the “primary purpose” language is pretty hard to apply. A program’s author may have one purpose in mind; a distributor may have another purpose in mind; and users may have a variety of purposes in using the software. Of course, the software itself can’t properly be said to have a purpose, other than doing what it is programmed to do. Most P2P software is programmed to distribute whatever files its users ask it to distribute. Is purpose to be inferred from the intent of the designer, or from the design of the software itself, or from the actual use of the software by users? Each of these alternatives leads to problems of one sort or another.

Note also the clever construction of the P2P definition, which requires only that the primary purpose be to connect the user to a network where some other people are offering files to share. It does not seem to require that the primary purpose of the network be to share files, or that the primary purpose of the software be to share files, but only that the software connects the user to a network where some people are sharing files. Note also that the purpose language refers only to the transfer of audio or video files, not to the infringing transfer of such files; so even a system that did only authorized transfers would seem to be covered by the definition. Finally, note that the bill apparently requires the filters to apply to all uses of the software in question, not just uses that involve networking or file transfer.

Fourth, it’s not clear what the bill says about situations where there is no workable filtering software, or where the only available filtering software is seriously flawed. Is there an obligation to install some filtering software, even if it doesn’t work very well, and even if it makes the P2P software unusable in practice? The bill’s language seems to assume that there is available filtering software that is known to work well, which is not necessarily the case.

The new version of the bill also adds enumerated exceptions for operating system or web browser software, email services, ISPs, home networks, and LANs (though the bill’s quirky definition of “LAN” would exclude most LANs I know of). As usual, it’s not a good sign when you have to create explicit exceptions for commonly used products like these. The definition still seems likely to ensnare new legitimate communication technologies.

(Thanks to Morgan Woodson (creator of an amusing Induce Act Hearing mashup) for bringing this to my attention.)

Separating Search from File Transfer

Earlier this week, Grokster and StreamCast filed their main brief with the Supreme Court. The brief’s arguments are mostly predictable (but well argued).

There’s an interesting observation buried in the Factual Background (on pp. 2-3):

What software like respondents’ adds to [a basic file transfer] capability is, at bottom, a mechanism for efficiently finding other computer users who have files a user is seeking….

Software to search for information on line … is itself hardly new. Yahoo, Google, and others enable searching. Those “search engines,” however, focus on the always-on “servers” on the World Wide Web…. The software at issue here extends the reach of searches beyond centralized Web servers to the computers of ordinary users who are on line….

It’s often useful to think of a file sharing system as a search facility married to a file transfer facility. Some systems only try to innovate in one of the two areas; for example, BitTorrent was a major improvement in file transfer but didn’t really have a search facility at all.

Indeed, one wonders why the search and file transfer capabilities aren’t more often separated as a matter of engineering. Why doesn’t someone build a distributed Web searching system that can cope with many unreliable servers? Such a system would let ordinary users find files shared from the machines of other ordinary users, assuming that the users ran little web servers. (Running a small, simple web server can be made easy enough for any user to do.)
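
To illustrate how little the file-transfer half takes, here is a minimal example using the web server that ships in Python’s standard library; it shares the contents of a directory over plain HTTP. It says nothing about the harder problem of letting other users find those files, which is the search half.

import http.server
import socketserver

PORT = 8000  # an arbitrary choice for this example

# SimpleHTTPRequestHandler serves files from the current directory.
handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), handler) as httpd:
    print("Sharing the current directory at http://localhost:%d/" % PORT)
    httpd.serve_forever()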

On the Web, file transfer and search are separated, and this has been good for users. Files are transferred via a standard protocol, HTTP, but there is vigorous competition between search engines. The same thing could happen in the file sharing world, where the search engines would presumably be decentralized. But then again, big Web search engines are decentralized in the sense that they consist of very large numbers of machines scattered around the world – they’re physically decentralized but under centralized control.

Why haven’t file sharing systems been built using separate products for search and file transfer? That’s an interesting question to think about. I haven’t figured out the answer yet.

Boosting

Congratulations to my Princeton colleague Rob Schapire on winning ACM’s prestigious Kanellakis Award (shared with Columbia’s Yoav Freund). The annual award is given for a contribution to theoretical computer science that has a significant practical impact. Schapire and Freund won this year for an idea called boosting, so named because it can take a mediocre machine learning algorithm and automatically “boost” it into a really good one. The basic idea is cool, and not too hard to understand at a high level.

A common type of machine learning problem involves learning how to classify objects based on examples. A learning algorithm (i.e., a computer program) is shown a bunch of example objects, each of which falls into one of two categories. Each example object has a label saying which category it is in. Each object can be labeled with an “importance weight” that tells us the relative importance of categorizing that object correctly; objects with higher weights are more important. The machine learning algorithm’s job is to figure out a rule that can be used to distinguish the two categories of objects, so that it can categorize objects that it hasn’t seen before. The algorithm isn’t told what to look for, but has to figure that out for itself.
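
To make the setup concrete, here is a tiny sketch of what labeled, weighted examples look like, and how a candidate rule’s “weighted error” is measured. The numbers and the rule are invented purely for illustration.

# Each example: (feature value, label of +1 or -1, importance weight)
examples = [
    (0.2, +1, 0.25),
    (0.5, -1, 0.25),
    (0.7, -1, 0.25),
    (0.9, -1, 0.25),
]

def rule(x):
    """A candidate rule: predict +1 for small values, -1 for large ones."""
    return +1 if x < 0.6 else -1

def weighted_error(rule, examples):
    """Total weight of the examples the rule gets wrong."""
    return sum(w for x, label, w in examples if rule(x) != label)

print(weighted_error(rule, examples))   # 0.25: the rule gets one quarter of the total weight wrong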

Any fool can “solve” this problem by creating a rule that just guesses at random. That method will get a certain number of cases right. But can we do better than random? And if so, how much better?

Suppose we have a machine learning algorithm that does just a little better than random guessing for some class of problems. Schapire and Freund figured out a trick for “boosting” the performance of any such algorithm. To use their method, start by using the algorithm on the example data to deduce a rule. Call this Rule 1. Now look at each example object and see whether Rule 1 categorizes it correctly. If Rule 1 gets an object right, then lower that object’s importance weight a little; if Rule 1 gets an object wrong, then raise that object’s weight a little. Now run the learning algorithm again on the objects with the tweaked weights, to deduce another rule. Call this Rule 1a.

Intuitively, Rule 1a is just like Rule 1, except that Rule 1a pays extra attention to the examples that Rule 1 got wrong. We can think of Rule 1a as a kind of correction factor that is designed to overcome the mistakes of Rule 1. What Schapire and Freund proved is that if you combine Rule 1 and Rule 1a in a certain special way, the combined rule that results is guaranteed to be more accurate than Rule 1. This trick takes Rule 1, a mediocre rule, and makes it better.

The really cool part is that you can then take the improved rule and apply the same trick again, to get another rule that is even better. In fact, you can keep using the trick over and over to get rules that are better and better.
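
Here is a simplified sketch of that loop, written in the style of AdaBoost, the best-known form of Schapire and Freund’s idea. The weak learner is a trivial threshold rule, the data is made up, and the details (how much to tweak the weights, how to combine the partial rules) follow the standard AdaBoost recipe rather than reproducing their papers exactly.

import math

# Labeled examples: (feature value, label of +1 or -1)
data = [(0.1, +1), (0.3, +1), (0.4, -1), (0.6, +1), (0.8, -1), (0.9, -1)]
n = len(data)
weights = [1.0 / n] * n   # start with all examples equally important

def best_stump(data, weights):
    """Weak learner: the threshold rule with the lowest weighted error."""
    best = None
    for thresh in sorted(set(x for x, _ in data)):
        for direction in (+1, -1):
            rule = lambda x, t=thresh, d=direction: d if x < t else -d
            err = sum(w for (x, y), w in zip(data, weights) if rule(x) != y)
            if best is None or err < best[0]:
                best = (err, rule)
    return best

rules = []   # (vote weight, rule) pairs making up the combined rule

for _ in range(5):
    err, rule = best_stump(data, weights)
    err = max(err, 1e-10)                     # avoid dividing by zero
    alpha = 0.5 * math.log((1 - err) / err)   # how much say this rule gets
    rules.append((alpha, rule))
    # Raise the weight of examples this rule got wrong, lower the rest,
    # then renormalize so the weights sum to 1 again.
    weights = [w * math.exp(-alpha * y * rule(x)) for (x, y), w in zip(data, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]

def combined(x):
    """The boosted rule: a weighted vote among all the partial rules."""
    return +1 if sum(a * r(x) for a, r in rules) >= 0 else -1

print([combined(x) == y for x, y in data])   # True where the boosted rule is right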

Stated this way, the idea doesn’t seem too complicated. Of course, the devil is in the details. What makes this discovery prize-worthy is that Schapire and Freund worked out the details of exactly how to tweak the weights and exactly how to combine the partial rules – and they proved that the method does indeed yield a better rule. That’s a very nice bit of computer science.

(Princeton won more of the major ACM awards than any other institution this year. Besides Rob Schapire’s award, Jennifer Rexford won the Grace Murray Hopper award (for contributions by an under-35 computer scientist) for her work on Internet routing protocols, and incoming faculty member Boaz Barak won the Dissertation Award for some nice crypto results. Not that we’re gloating or anything….)