I wrote Monday about pharming attacks, in which a villain corrupts the DNS system, which translates textual names (like “www.freedom-to-tinker.com”) into the IP addresses (like “216.157.129.231”) that are used to route traffic on the Internet. By doing this, the villain can impersonate an Internet site convincingly. Today I want to talk about how to address this problem.
The best approach would be to secure the DNS system. We know how to do this. Solutions involve having authoritative DNS servers put some kind of digital signature on the information they give out, so that a computer receiving DNS translation information can verify that the information is endorsed by an authoritative server. Such a system, if universally deployed, would put the pharmers out of business. Unfortunately, secure DNS is not widely deployed.
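To make the idea concrete, here is a toy sketch of signed DNS answers. Real DNSSEC uses public-key signatures (RRSIG records with a chain of trust from the root); this sketch substitutes an HMAC with a single key just to show the verification logic, and the key and addresses are illustrative only.

```python
import hashlib
import hmac

ZONE_KEY = b"example-zone-signing-key"  # stand-in for a real zone's private key

def sign_record(name: str, ip: str) -> str:
    """Authoritative server: sign a name-to-IP binding."""
    return hmac.new(ZONE_KEY, f"{name}={ip}".encode(), hashlib.sha256).hexdigest()

def verify_record(name: str, ip: str, sig: str) -> bool:
    """Resolver: accept the answer only if the signature checks out."""
    expected = hmac.new(ZONE_KEY, f"{name}={ip}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# A legitimate answer verifies...
sig = sign_record("www.freedom-to-tinker.com", "216.157.129.231")
assert verify_record("www.freedom-to-tinker.com", "216.157.129.231", sig)

# ...but a pharmer substituting his own IP cannot produce a valid signature.
assert not verify_record("www.freedom-to-tinker.com", "10.0.0.66", sig)
```

A resolver that insists on a valid signature simply discards the pharmer's forged answer, which is the whole point of the scheme.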
A partial solution, for web access at least, is to access websites via secure (HTTPS) connections. The user, on seeing a valid site, would notice the lock icon on his browser, and would know that his machine was connected to the legitimate owner of the URL that his browser was displaying. A pharmer could make accesses to “www.citibank.com” go to his evil site, but he couldn’t fool the secure-connection mechanism, so he could not make the lock icon on the user’s browser light up.
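The checks that stand behind the lock icon can be seen in miniature with Python's ssl module: a default client context refuses to complete a handshake unless the server presents a certificate, issued by a trusted authority, for exactly the hostname the client asked for.

```python
import ssl

# A default client context performs the lock-icon checks: it requires a
# certificate, verifies the chain back to a trusted authority, and checks
# the certificate against the requested hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# A pharmer redirecting "www.citibank.com" to his own server cannot present
# a valid certificate for that name, so the handshake would fail with
# ssl.SSLCertVerificationError instead of lighting up the lock icon.
```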
This approach works fine, as long as users notice the lock icon and refuse to deal with sites that don’t use secure connections. Will users be so vigilant? Probably not. In practice, many sites fail to use secure connections, and browsers give subtle indications of whether a connection is secure but don’t scream about insecure connections. (How could they, when insecure connections are so common?)
One drawback of relying on secure web connections is that it doesn’t protect other communication services, such as email and instant messaging. Pharmers might try to attract a user’s email or IM connections to hostile servers. We know how to secure email, by assigning encryption keys to individuals and having them encrypt and digitally sign their email. Standard email programs even know how to handle encryption and signing. But, again, few people use these facilities.
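The signed-email format those standard programs handle (S/MIME) wraps the readable message and a detached signature in a multipart/signed container. Here is the structure built with Python's email package; the signature bytes are a placeholder, since a real client would compute them with the sender's private key.

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# The message body the recipient will read.
body = MIMEText("Please wire the funds today.")

# A detached signature over the body.  Real S/MIME clients compute this
# with the sender's private key; these bytes are a placeholder.
signature = MIMEApplication(b"<signature bytes>", "pkcs7-signature",
                            name="smime.p7s")

signed = MIMEMultipart("signed", protocol="application/pkcs7-signature")
signed.attach(body)
signed.attach(signature)
```

The receiving mail program verifies the signature part against the body part; if verification fails, or the message is unsigned, it can warn the user.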
You may have noticed a common pattern here: each of these mechanisms would be effective if widely adopted, but none is used much in practice. In each case, we have a collective action problem. If nearly everybody adopted one of these technologies, then the holdouts would have an incentive to adopt it too; but until a critical mass of adoption is reached, there is little incentive for others to join.
Consider secure web connections. If nearly every website used secure connections, then insecure connections would be rare enough that browsers could issue prominent warnings whenever they saw an insecure connection. This would give legitimate websites a strong incentive to use secure connections, in order to avoid scaring users away. Today, insecure connections are so common that they don’t attract any suspicion. (An online banking site that used insecure connections would be odd, and might arouse suspicion from alert users; but we’re far from the point when browsers expect secure connections from everybody.)
A similar problem holds for secure email. I could digitally sign my outgoing email, but this wouldn’t do much to prevent forged messages in practice. A forged message would of course be unsigned, but unless unsigned messages were rare, nobody would be taken aback on seeing one. But if almost all messages were digitally signed, then an unsigned message would be rare enough to arouse suspicion, and might trigger a prominent warning from the user’s email program.
In all of these cases, there is a tipping point, where the authentication technology is used so widely that failing to use it attracts suspicion. Once the tipping point is reached, the remaining holdouts will switch to using the technology. Assuming we agree that it would be good to adopt one of these technologies, how can we get to the tipping point?
Let me reinforce the notion that SSL certificates are not the complete answer. In fact, they may provide a false sense of security. The fact that some big company with a CA root in my browser has issued a certificate to a web site is not necessarily a good reason for me to trust that web site.
Certificates (whether for encrypting or signing or client/server authentication) are particularly good for reassuring you that you are connecting to the same site you connected to yesterday or last week, or that you are receiving email or IM from the same person who sent you one yesterday or last week. If you have reason to trust that person or site then you’re all set (where “trust” in this context could mean as little as trusting the other to “carry on a civil discussion”).
Roland Schulz apparently isn’t aware of a couple of technical points about SSL and filtering. First, as of today, you cannot virtual host multiple SSL sites on a single IP (there is a TLS extension which will make this possible, but as of today no browser supports the RFC). And you can easily filter by website even with SSL, by simply looking at the certificate which is returned by the server as part of the handshake (which is sent in the clear).
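The filtering point can be sketched briefly: a filter that sees the certificate from the handshake can decide by the names in it without decrypting anything. The dictionary below mimics the shape Python's ssl module returns from getpeercert(); the site names and blocklist are made up for illustration.

```python
# Shape of the dictionary that Python's ssl getpeercert() returns for a
# verified peer; the names and blocklist here are hypothetical.
peercert = {
    "subject": ((("commonName", "www.example-bank.com"),),),
    "subjectAltName": (("DNS", "www.example-bank.com"),
                       ("DNS", "example-bank.com")),
}

BLOCKLIST = {"www.example-bank.com"}

def site_names(cert):
    """Collect every hostname the certificate claims to cover."""
    names = {value for kind, value in cert.get("subjectAltName", ())
             if kind == "DNS"}
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                names.add(value)
    return names

def should_block(cert):
    """Filtering decision made purely from the certificate in the handshake."""
    return bool(site_names(cert) & BLOCKLIST)

assert should_block(peercert)
```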
Lastly, in the case of a machine hosting multiple sites off a single IP, entering the bare IP will cause the machine to return the “default” site, which may or may not be the one you want (at least with Apache). There is no way to get any of the alternate virtual hosts, at least not in a normal browser (one that allowed you to enter an IP into the navbar and override the Host: HTTP header would work, but I don’t know of any browser that gives you that kind of control, though you could toss one together with LWP, I suppose).
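That default-site behavior is easy to demonstrate with a few lines of stdlib Python: a tiny server picks which "site" to serve purely from the Host: header, and a request by bare IP (whose Host: header is just the IP) falls through to the default. The hostnames here are invented for the demo.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Two hypothetical sites cohosted on one IP; the server chooses which one
# to serve purely from the Host: header, falling back to a default.
SITES = {"alpha.example": b"alpha site", "beta.example": b"beta site"}
DEFAULT = b"default site"

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host, DEFAULT)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(host_header):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Host": host_header})
    body = conn.getresponse().read()
    conn.close()
    return body

alpha = fetch("alpha.example")
beta = fetch("beta.example")
by_ip = fetch("127.0.0.1")  # bare IP -> the "default" site
server.shutdown()

assert alpha == b"alpha site"
assert beta == b"beta site"
assert by_ip == b"default site"
```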
DNSSEC2!
SSL does not solve phishing/pharming problems, at least not the way it’s implemented by the vast majority of web site operators. The phishers/pharmers are starting to use SSL certs to make their sites look even more real. The stupid little lock icon is being leveraged to make the users feel more secure. SSL is useless without some sort of mutual authentication.
Have you ever tracked how many web sites your browser retrieves pages from in a single day? My *average* is 160 unique sites.
Given that the web works by allowing people to link to many sites and navigate with ease, the root problem for securing sensitive transactions is a usability issue. Users need to become accustomed to having two different security modes — one for normal browsing and one for conducting sensitive transactions.
I’ve suggested in the past that the “sensitive transactions” portion should have a high up-front setup cost and no margin for error with broken certificates and other problems. Support among the major financial institutions for a browser mode like this would go a long way to enabling it. A user would “register” a web site with the browser much like they did in the old Netscape days and SSL.
But I’m not convinced anyone really cares that much. Back in 2000, when I last looked at this problem in detail, banks and brokerages weren’t losing enough money through web pharming or phishing to make them do anything about the problem. Other risk mitigation strategies were more cost-effective. You’ll need to be certain that this problem really is costing financial institutions money.
Just some food for thought on the phishing/pharming evil-doers, who have found more than one way not only to take advantage of the average user’s basic “trust factor” but also to exploit poorly written/secured/tested financial websites’ online applications. [netcraft.com]
“Online Banking customers are being hit hard by steadily innovating phishing techniques which are being used by fraudsters to steal money and identities. The Netcraft Toolbar community has recently received two different attacks against Charter One Bank customers. In the first incident, discovered last week, fraudsters exploited a facility which allowed them to display their own content within the Charter One Personal Online Banking SSL site at http://www.totallyfreebanking.com…..”
~Daemons @ Santa Fe~ Faithfully ACKnowledging our SYNs
SSL for email solves authentication and reliability
SSL only authenticates the sender’s mail transfer agent and the receiver’s mail transfer agent to each other. It does not authenticate the email content, or even that the original sender is the person specified in the email headers. That said, will your setup really notice when you get DNS-spoofed into using a different mailserver which also has a self-signed certificate that looks similar to the one you created?
Filtering for children can be done by website, not by webpage;
No, it cannot. Several providers cohost several sites on one IP, each of which may have material that is not suitable for children, but each of which might also contain material valuable for a child’s education. Because of this, you cannot filter by IP. You cannot filter by domain name either, since then the child can easily bypass the filtering by just using the IP.
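The bypass is trivial to show: a filter that matches only the hostname in the URL waves through the exact same server when it is addressed by IP. The blocklisted name and address below are made up for illustration.

```python
from urllib.parse import urlsplit

BLOCKED_HOSTS = {"bad.example.com"}  # hypothetical name-based blocklist
BAD_IP = "203.0.113.5"               # the same server's address (made up)

def allowed(url: str) -> bool:
    """Name-based filter: block only if the URL's hostname is on the list."""
    return urlsplit(url).hostname not in BLOCKED_HOSTS

assert not allowed("http://bad.example.com/page")  # blocked by name...
assert allowed(f"http://{BAD_IP}/page")            # ...trivially bypassed by IP
```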
the scanners can use secure connections to receive the content that they scan.
Giving the attacker just the right single point of failure to compromise to gain everything, personal information, credit card information, billing information 🙂 The point of using SSL is having end-to-end security, as soon as you get a middle man into it, you’re compromising the whole design.
But that wasn’t my point. Some sites you will want to view with SSL enabled, but most sites almost everyone will probably want to view as plain http, since it’s so much more efficient to be able to use proxies. Some more paranoid users will want to use proxy filtering to remove certain content, like browser exploit attempts, or just to block annoying popups. You cannot force all users to suddenly switch to https and lose all these advantages.
Filtering what my kids see is not entirely misguided, but I tend to do it by having their computers in a public place, and blocking off a few attractive nuisances in the firewall.
Yes, this is the sensible way to do it. Filtering pages based on content (especially if this is done by stupid keyword search) is not, and I therefore describe it as misguided.
If brains were widely adopted and deployed, viruses would not be a problem.
There were viruses before Microsoft came up with all those funny ideas about opening and trusting content delivered from anywhere. Most of those old-time viruses exploited coding errors in one piece of software or another, without the user having to point and click, open an attachment, or whatever… You can have a brain the size of a barn and still get owned by one of those 😉
DNS-spoofing is something that is exceedingly hard to defend against, even with brains,
DNSSec has been specified and implemented for several years now; I even extended a small alternative DNS server to do DNSSec myself, about six years ago. People knew even then that this problem would strike someday. Defending against a DNS-spoofing attempt is a no-brainer if the site uses DNSSec and the root nameservers finally provide the necessary signatures. It has never been a matter of brains; it has always been a matter of a tradeoff between efficiency and security, and up to now the former has prevailed entirely. I hope that now that this is being exploited in the wild, the root nameserver operators will be pushed into using DNSSec, even if it means the root zone files will grow by an order of magnitude in size (which they will, so upgrades to the server infrastructure will be necessary).
Not so fast! Many of the visitors to one of the sites I run (and these are non-technical people) always look for “the lock” whenever they go to the fulfillment page. I know this because at one point, when we were upgrading software, one of the links got messed up and pointed to the unencrypted test page. I got many emails from users who didn’t want to enter their credit card info (they were fine with giving the card number over the phone, go figure) when “the lock” was absent.
I think we’re getting to Total Awareness, just with small steps.
For nearly two years now, a business I administer has been signing outbound emails, hundreds of them, and only once has one person called back, because “the certificate is expired.” Actually it’s a self-signed one, and where to get the CA is stated in the signature of each mail, but that page gets no hits at all. So much for awareness of SSL. It doesn’t seem to do any damage either, though 😉
One hurdle on the way to our tipping point that may trip us up, is the cost of encryption.
First, in CPU cycles: while it might be only a fraction more expensive to serve a website over SSL, the fractions add up real quick when you hit a big site, like Slashdot. This means big sites need more servers in the web cluster, and small servers melt faster when Slashdot notices them.
Second, to prevent complaints about self-signed certificates from browsers, every SSL website would have to give money to a Certificate Authority – Verisign or similar. I’m sure they’d just $LOVE$ this proposal. At the same time, any alternative to the Certificate Authority problem requires that “naughty people” not gain fraudulent certificates. By this, I mean let them get all the certificates for http://www.thisisaphishsite.com they want, but they must not get http://www.citibank.com.
Still an easier fight than getting smarter users 🙂 (Could we call that a cost of education? trying to get people to understand they need encryption, and then, how to use it once they have it.)
I don’t agree about it being a coordination problem. Virtually every financial site already uses SSL. This is something that users can solve for themselves, right away, today. All they have to do is to train themselves to look for the lock icon (first they have to figure out where it is!) when they go to a financial site. They don’t have to wait for the whole world to change. The world has done its part, now it’s up to each individual user to change. As each person makes this change, he gains the benefit of immunity to these forms of fraud. There’s no coordination involved.
SSL for email solves authentication and reliability; every time I connect to the machine in my basement, it whines at me because my certificate is self-signed, and is not traced back to some authority. Similarly, when I do SSL to mac.com (over port 25, I think that’s a bug on their part) at TMobile Hotspot, the connection fails because Hotspot hijacks outbound 25. So SSL for email already works almost perfectly, and all that is necessary is to use an ISP or internet host that supports it.
Filtering for children can be done by website, not by webpage; the scanners can use secure connections to receive the content that they scan. Filtering what my kids see is not entirely misguided, but I tend to do it by having their computers in a public place, and blocking off a few attractive nuisances in the firewall.
If brains were widely adopted and deployed, viruses would not be a problem. I have used a virus scanner for only a few months of the last 10 years, and did not like it. I have never, not even once, been fooled by an email virus, including five years using Windows extensively. DNS-spoofing is something that is exceedingly hard to defend against, even with brains, unless we use something like SSL certificates to verify the identity of the other party.
If nearly every website used secure connections, then insecure connections would be rare enough that browsers could issue prominent warnings whenever they saw an insecure connection.
And at the same time it would render ineffective all the neat technologies that preserve bandwidth or improve reliability today, like proxies and load balancers. Also, SSL prevents content monitoring, as used for example in those misguided attempts to keep children away from sites featuring XXX in their page content 😉 But one might also imagine checking for viruses or browser exploit attempts at the proxy or monitoring IDS level, both impossible with SSL connections.
As usual, security here is in a tradeoff with functionality and usability.