November 23, 2024

Pharming

Internet spoofing attacks have been getting more and more sophisticated. The latest evil trick is “Pharming,” which relies on DNS poisoning (explained below) to deceive users about which site they are viewing. Today I’ll explain what pharming is. I’ll talk about fixes later in the week.

Spoofing attacks, in general, try to get a user to think he is viewing one site (say, Citibank’s home banking site) when he is really viewing a bogus site created by a villain. The villain makes his site look just like Citibank’s site, so that the user will trust the site and enter information, such as his Citibank account number and password, into it. The villain then exploits this information to do harm.

Today most spoofing attacks use “phishing.” The villain sends the victim an email, which is forged to look like it came from the target site. (Forging email is very easy – the source and content of email messages are not verified at all.) The forged email may claim to be a customer service message asking the victim to do something on the legitimate site. The email typically contains a hyperlink purporting to go to the legitimate site but really going to the villain’s fake site. If the victim clicks the hyperlink, he sees the fake site.

The best defense against phishing is to distrust email messages, especially ones that ask you to enter sensitive information into a website, and to distrust hyperlinks in email messages. Another defense is to have your browser tell you the name of the site you are really visiting. (The browser’s Address line tries to do this, so in theory you could just look there, but various technical tricks may make this harder than you think.) Tools like SpoofStick display “You’re on freedom-to-tinker.com” in big letters at the top of your browser window, so that you’re not fooled about which site you’re viewing. The key idea in these defenses is that your browser knows which domain (e.g. “citibank.com” or “freedom-to-tinker.com”) the displayed page is coming from.

“Pharming” tries to fool your computer about where the data is coming from. It does this by attacking DNS (the Domain Name System), the service that translates names like “freedom-to-tinker.com” into numeric addresses for you.

The Internet uses two types of addresses to designate machines. IP addresses are numbers like 128.112.68.1. Every data packet that travels across the Internet is labeled with source and destination IP addresses, which are used to route the packet from the packet’s source to its destination.

DNS addresses are text strings like www.citibank.com. The Internet’s routing infrastructure doesn’t know anything about DNS addresses. Instead, a DNS address must be translated into an IP address before data can be routed to it. Your browser translated the DNS address “www.freedom-to-tinker.com” into the IP address “216.157.129.231” in the process of fetching this page. To do this, your browser probably consulted one or more servers out on the Internet, to get information about proper translations.
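To make the translation step concrete, here is a minimal Python sketch that performs the same kind of lookup your browser does, using the operating system’s resolver. (The address returned today will almost certainly differ from the one quoted above.)

```python
import socket

# Ask the operating system's resolver (which may in turn query
# DNS servers out on the Internet) to translate a DNS name
# into an IP address.
hostname = "www.freedom-to-tinker.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```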

“Pharming” attacks the translation process, to trick your computer somehow into accepting a false translation. If your computer accepts a false translation for “citibank.com,” then when you communicate with “citibank.com” your packets will go to the villain’s IP address, and not to the IP address of Citibank. I’ll omit the details of how a villain might do this, as this post is already pretty long. But here’s the scary part: if a pharming attack is successful, there is no information on your computer to indicate that anything is wrong. As far as your computer (and the software on it) is concerned, everything is working fine, and you really are talking to “citibank.com”. Worse yet, the attack can redirect all of your Citibank-bound traffic – email, online banking, and so on – to the villain’s computer.
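I won’t sketch real poisoning techniques here, but a toy model shows why everything looks normal from the victim’s side. Treat the resolver as nothing more than a cache of name-to-address translations; once the villain overwrites one entry, every later lookup happily returns the wrong answer. (All names and addresses below are made up; this simulates the effect, not an actual attack.)

```python
# Toy model of a DNS cache. Real resolvers are far more complex,
# but the essential point survives: software trusts whatever
# translation the cache hands back.
dns_cache = {
    "citibank.com": "192.0.2.10",  # pretend legitimate address
}

def resolve(name):
    # The victim's software has no way to tell a poisoned entry
    # from a genuine one; it just reads the cache.
    return dns_cache[name]

# The pharming step: one poisoned entry.
dns_cache["citibank.com"] = "198.51.100.66"  # pretend villain's address

# From now on, everything bound for citibank.com (web, email,
# online banking) goes to the villain's machine instead.
print(resolve("citibank.com"))  # prints 198.51.100.66
```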

What can be done about this problem? That’s a topic for another day.

Harvard Business School Boots 119 Applicants for "Hacking" Into Admissions Site

Harvard Business School (HBS) has rejected 119 applicants who allegedly “hacked” into a third-party site to learn whether HBS had admitted them. An AP story, by Jay Lindsay, has the details.

HBS interacts with applicants via a third-party site called ApplyYourself. Harvard had planned to notify applicants on March 30 whether they had been admitted. Somebody discovered last week that some applicants’ admit/reject letters were already available on the ApplyYourself website. There were no hyperlinks to the letters, but a student who was logged in to the site could access his/her letter by constructing a special URL. Instructions for doing this were posted in an online forum frequented by HBS applicants. (The instructions, which no longer work due to changes in the ApplyYourself site, are reproduced here.) Students who did this saw either a rejection letter or a blank page. (Presumably the blank page meant either that HBS would admit the student, or that the admissions decision hadn’t been made yet.) 119 HBS applicants used the instructions.
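The actual instructions are at the link above. As a purely hypothetical illustration of the general class of flaw (nowadays often called an insecure direct object reference), imagine a site that serves decision letters at URLs built from identifiers the logged-in applicant can already see on his own pages, with no check that the letter has been released:

```python
# Hypothetical sketch only: these parameter names are invented,
# not ApplyYourself's real URL scheme. The generic flaw is that
# the letter's URL is predictable from identifiers the applicant
# already has, and the server never checks whether the letter
# has been released yet.
applicant_id = "123456"                      # hypothetical
base = "https://apply.example.com/decision"  # hypothetical
letter_url = f"{base}?school=hbs&applicant={applicant_id}"
print(letter_url)
```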

Harvard has now summarily rejected all of them, calling their action a breach of ethics. I’m not so sure that the students’ action merits rejection from business school.

My first reaction on reading about this was surprise that HBS would make an admissions decision (as it apparently had done in many cases) and then wait for weeks before informing the applicant. Applicants rejected from HBS would surely benefit from learning that information as quickly as possible. Harvard had apparently gone to the trouble of telling ApplyYourself that some applicants were rejected, but they weren’t going to tell the applicants themselves!? It’s hard to see a legitimate reason for HBS to withhold this information from applicants who want it.

As far as I can tell, the only “harm” that resulted from the students’ actions is that some of them learned the information about their own status that HBS was, for no apparent reason, withholding from them. And the information was on the web already, with no password required (for students who had already logged on to their own accounts on the site).

I might feel differently if I knew that the applicants were aware that they were breaking the rules. But I’m not sure that an applicant, on being told that his letter was already on the web and could be accessed by constructing a particular URL, would necessarily conclude that accessing it was against the rules. And it’s hard to justify punishing somebody who caused no real harm and didn’t know that he was breaking the rules.

As the AP article suggests, this is an easy opportunity for HBS (and MIT and CMU, who did the same thing) to grandstand about business ethics, at low cost (since most of the applicants in question would have been rejected anyway). Stanford, on the other hand, is reacting by asking the applicants who viewed their Stanford letters to come forward and explain themselves. Now that’s a real ethics test.

Tagging Technology

Bruce Schneier points to a new product, Smart Water. Each bottle has its own unique tag, and the water in it contains tagging elements (e.g., microdots) that will stick to an object if you spray the Smart Water on it. Then, if the item is stolen, the company says that the police can use the tags to identify the real owner.

Bruce, being a smart security analyst, immediately sees attacks on the system:

The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.

He has a point, but this doesn’t mean that Smart Water is useless. As often happens with security products, one has to think carefully about what can be deduced from a particular fact. The fact that an item has Bruce’s tag on it doesn’t prove that the item belongs to Bruce, but it does prove that Smart Water from Bruce’s bottle has been near the object. (Actually, it doesn’t even prove that, unless the tags have certain anti-forgery properties. What exactly these properties are, and how to achieve them, is left as an exercise for the reader.)

If Bruce is your neighbor, and he has been in your house recently, then the presence of his tags on your valuables means little. On the other hand, if there is no apparent connection between you and Bruce, and an item locked in the safe in your house has his tags on it, and he was known to own an item like that which he has reported stolen, then you have some explaining to do.

This seems like a technology that will have unforeseen uses, some of which are sure to be annoying. I could put my tags on the shirt I give you for Christmas, and then check to see whether that same shirt shows up back in the store later. I could spray my tags onto my back porch, and then see whether they turn up on my neighbor’s cat. These are mildly annoying, but given enough people with enough annoying goals, I’m sure some interesting ideas will turn up.

Just wait until tags like these are RFID-enabled. Then the fun will really start.

French Researcher Faces Criminal Charges for Criticizing Antivirus Product

Guillaume Tena, a researcher also known as Guillermito, is now being tried on criminal copyright charges, and facing jail time, in France. He wrote an article analyzing an antivirus product called Viguard and pointing out its flaws. The article is in French, and standard online translators seem to choke on it. My French is poor at best, so I have only a general idea of what it says. But it sure looks like the kind of criticism a skeptical security researcher would write.

This is a standard legal-attack-on-security-researcher story. Company makes grand claims for its product; security researcher writes paper puncturing claims; company launches rhetorical and legal attack on researcher; researcher’s ideas get even wider attention but researcher himself is in danger. Everybody in the security research field knows these stories, and they do deter useful research, while further undermining researchers’ trust in unsupported vendor claims.

At least one thing is unusual about Tena’s legal case. Rather than being charged with violating some newfangled DMCA-like law, he is apparently being charged with old-fashioned copyright infringement (or the French equivalent), because his criticism incorporated some material that is supposedly derivative of the copyrighted Viguard software. Unlike some previous attacks on researchers, this one may not have been enabled by the recent expansion of copyright law. Instead, it seems to be enabled by a combination of two factors: (1) traditional copyright law allows such a case to be brought, even though Tena caused none of the harm that copyright law is supposed to prevent; and (2) the authorities decided to single him out for prosecution because somebody was angry about what he wrote.

It’s bad enough that Tegam, the company that created Viguard, is going after Tena. Why is the French government participating? Here’s a hint: Tegam’s statement plays on French nationalism:

TEGAM International has for many years been the only French company to design, develop, market and provide support for antivirus and security software in France. It has chosen a global approach to security, not relying on signature updates [a method used by the most popular U.S. antivirus products].

In the software sector, everybody knows that some people would like to exert their technological domination, and as a result crush any attempt to create an alternative. As the battle goes on to try to preserve and strengthen research in France, TEGAM International defends its difference and the results of its own research.

Whom Should We Search at the Airport?

Here’s an interesting security design problem. Suppose you’re in charge of airport security. At security checkpoints, everybody gets a primary search. Some people get a more intensive secondary search as a result of the primary search, if they set off the metal detector or behave suspiciously during the primary search. In addition, you can choose some extra people who get a secondary search even if they look clean on the primary search. We’ll say these people have been “selected.”

Suppose further that you’re given a list of people who pose a heightened risk to aviation. Some people may pose such a serious threat that we won’t let them fly at all; I’m not talking about them, but about people whose risk is higher than average yet still low overall. When I call these people “high-risk,” I mean high relative to the average passenger, not high in absolute terms.

Who should be selected for secondary search? The obvious answer is to select all of the high-risk people, and some small fraction of the ordinary people. This ensures that a high-risk person can’t fly without a secondary search. And to the extent that our secondary-searching people and resources would otherwise be idle, we might as well search some ordinary people. (Searching ordinary people at random is also a useful safeguard against abusive behavior by the searchers, by ensuring that influential people are occasionally searched.)

But that might not be the best strategy. Consider the problem faced by a terrorist leader who wants to get a group of henchmen and some contraband onto a plane in order to launch an attack. If he can tell which of his henchmen are on the high-risk list, then he’ll give the contraband to a henchman who isn’t on the list. If we always select people on the list, then he can easily detect which henchmen are on the list by having the henchmen fly (without contraband) and seeing who gets selected for a secondary search. Any henchman who doesn’t get selected is not on the high-risk list; and so that is the one who will carry the contraband through security next time, for the attack.
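A quick Bayesian calculation makes the leak precise. Suppose a fraction f of the henchmen are on the high-risk list, listed people are selected with probability p, and everyone else with some small probability q. The sketch below (with made-up numbers) computes the chance that a henchman who sailed through the probe flight is nevertheless on the list. At p = 1 the probe is a perfect oracle; at p < 1 it is not:

```python
def prob_on_list_given_not_selected(f, p, q):
    # P(on list | not selected on the probe flight), by Bayes' rule.
    not_selected = f * (1 - p) + (1 - f) * (1 - q)
    return f * (1 - p) / not_selected

f, q = 0.5, 0.05  # assumed: half the henchmen listed, 5% random search rate
for p in (1.0, 0.9, 0.7):
    posterior = prob_on_list_given_not_selected(f, p, q)
    print(f"p = {p:.1f}: P(on list | not selected) = {posterior:.2f}")
# At p = 1.0 the posterior is 0: escaping the probe proves the
# henchman is off the list. At p < 1 it proves nothing.
```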

The problem here is that our adversary can probe the system, and use the results of those probes to predict our future behavior. We can mitigate this problem by being less predictable. If we decide that people on the high-risk list should be selected usually, but not always, then we can introduce some uncertainty into the adversary’s calculation, by forcing him to worry that a henchman who wasn’t selected the first time might still be on the high-risk list.

The more we reduce the probability of searching high-risk people, the more we increase the adversary’s uncertainty, which helps us. But we don’t want to reduce that probability too far – after all, if we trick the terrorist into giving the contraband to a high-risk henchman, we still want a high probability of selecting that henchman the second time. Depending on our assumptions, we can calculate the optimal probability of secondary search for high-risk people. That probability will often be less than 100%.
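Continuing with the same made-up parameters, we can sweep p and compute the probability that the carrier (a henchman the adversary picked because he was not selected on the probe flight) gets searched on the attack flight. The optimum depends entirely on the assumed f and q, so treat the numbers as illustrative only:

```python
def prob_carrier_searched(f, p, q):
    # The adversary hands the contraband to a henchman who was NOT
    # selected on the probe flight. Chance we search that carrier
    # on the attack flight (searches assumed independent per flight).
    not_selected = f * (1 - p) + (1 - f) * (1 - q)
    on_list = f * (1 - p) / not_selected  # posterior from the probe
    return on_list * p + (1 - on_list) * q

f, q = 0.5, 0.05  # same assumptions as the sketch above
best_p, best_val = max(
    ((p / 100, prob_carrier_searched(f, p / 100, q)) for p in range(101)),
    key=lambda pair: pair[1],
)
print(f"best search probability for listed people: {best_p:.2f}")
print(f"chance the carrier is then searched: {best_val:.2f}")
# Always searching listed people (p = 1) drives the carrier's search
# probability all the way down to q, because the probe tells the
# adversary exactly whom we will leave alone.
```

With these particular numbers the sweep lands at an optimum around p ≈ 0.6, comfortably below 100%, which is exactly the point: the best policy is often not to search listed people every time.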

But now consider the politics of the situation. Imagine what would happen if (God forbid) a successful attack occurred, and if we learned afterward that one of the attackers had carried contraband through security, and that the authorities knew he posed a heightened risk but chose not to search him, due to a deliberate strategy of not always searching known high-risk people. The recriminations would be awful. Even absent an attack, a strategy of not always searching is an easy target for investigative reporters or political opponents. Even if it’s the best strategy, it’s likely to be infeasible politically.