November 23, 2004

Lycos Attacks Alleged Spammers

Lycos Europe is distributing a screen saver that launches denial of service attacks on the websites of suspected spammers, according to a Craig Morris story at Heise Online. The screen saver sends dummy requests to the servers in order to slow them down. It even displays information to the user about the current attack target.

This is a serious lapse of judgment by Lycos. For one thing, this kind of vigilante attack erodes the line between the good guys and the bad guys. Spammers are bad because they use resources and keep people from getting to the messages they want to read. If you respond by wasting resources and keeping people from getting to the websites they want to read, it’s hard to see what separates you from the spammers.

For another, this kind of attack can be misdirected at innocent parties. The article says that Lycos is attacking sites on the SpamCop blocklist. That doesn’t fill me with confidence – this site has been on the SpamCop blocklist at least once, despite having nothing at all to do with spam. (The cause was an erroneous complaint, coupled with a hair-trigger policy by SpamCop.)

We also know that spammers have a history of trying to frame innocent people as being sources of spam. A basic method for doing this is common enough to have a name: “Joe job”. Attacking the apparent sources of spam just makes such misdirection more effective.

And finally, there’s the question of whether this is legal. The Heise Online article reaches no conclusion about its legality in Germany, and I don’t know enough to say whether it’s legal in the U.S. Lycos argues that it’s not really a denial of service attack because they’re careful not to block access to the sites completely. But they do brag about raising the sites’ costs and degrading the experience of the sites’ users. That’s enough to make it a denial of service attack in my book.

This idea – attacking spammer sites – is one that surfaces occasionally, but usually cooler heads prevail. It’s a real surprise to see a prominent company putting it into action.

[Link via TechDirt. And did I mention that TechDirt is a great source of interesting technology news?]

UPDATE (Dec. 6): Lycos has now withdrawn this program, declaring implausibly that it has succeeded and so is no longer needed.

Keylogging is Not Wiretapping, Judge Says

A Federal judge in California recently dismissed wiretapping charges against a man who installed a “keylogger” device on the cable between a woman’s keyboard and her computer. I was planning to write a reaction to the decision, but Orin Kerr seems to have nailed it already.

This strikes me as yet another example of a legal analyst (the judge, in this case) focusing on one layer of a system and not seeing the big picture. By fixating on the fact that the interception happened at a place not directly connected to the Internet, the judge lost sight of the fact that many of the keystrokes being intercepted were being transmitted over the Net.

CallerID and Bad Authentication

A new web service allows anybody to make phone calls with forged CallerID (for a fee), according to a Kevin Poulsen story at SecurityFocus. (Another such service operated briefly a few months ago.) This isn’t surprising, given the known insecurity of the CallerID system, which trusts the system where a call originates to provide accurate information about the calling number.

This is more than just a prankster’s delight, since some technologies are designed to use CallerID as if it were a secure identifier of the calling number. Poulsen reports, for instance, that T-Mobile uses CallerID to authenticate its customers’ access to their voicemail. If I can call the T-Mobile voicemail system, while sending CallerID information indicating that the call is coming from your phone, then I can access your voicemail box.
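To make the problem concrete, here is a minimal sketch (in Python, with invented numbers and function names; T-Mobile’s actual system surely differs) of what authenticating by CallerID amounts to. The only “credential” checked is a value asserted by the caller’s own equipment, which is exactly what the spoofing services let an attacker choose.

```python
# Toy illustration of CallerID-based authentication (hypothetical data).
# The flaw: caller_id is asserted by the originating system, so anyone using
# a spoofing service can present whatever number they like.

VOICEMAIL = {
    "+1-609-555-0100": ["message from Bob", "message from Carol"],
}

def answer_call(caller_id: str) -> list[str]:
    # Treats an attacker-controllable field as proof of identity; no PIN required.
    return VOICEMAIL.get(caller_id, [])

# With spoofed CallerID, the attacker simply claims to be the victim's phone:
print(answer_call("+1-609-555-0100"))
```

The general fix is to demand something the caller actually knows or possesses – a PIN or a cryptographic credential – rather than something the caller merely asserts.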

Needless to say, it’s a bad idea to use an insecure identifier to authenticate accesses to any service. Still, this mistake is often made.

A common example of the same mistake is to use IP addresses (the numeric addresses that designate “places” on the Internet) to authenticate users of an Internet service. For example, if Princeton University subscribes to some online database, the database service may allow access from any of the IP addresses belonging to Princeton. This is a bad idea, since IP addresses can sometimes be spoofed, and various legitimate services can make an access seem to come from one address when it’s really coming from another.
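Here is the same pattern sketched for the IP-address case (the netblock and the service are illustrative, not any particular vendor’s actual check): the source address serves as both identifier and credential.

```python
# Hypothetical IP-allowlist check for a subscription database service.
from ipaddress import ip_address, ip_network

SUBSCRIBER_NETWORKS = [ip_network("128.112.0.0/16")]  # illustrative campus netblock

def is_authorized(client_ip: str) -> bool:
    # Grants full access to anything that merely *appears* to come from
    # a subscriber's address range.
    addr = ip_address(client_ip)
    return any(addr in net for net in SUBSCRIBER_NETWORKS)

print(is_authorized("128.112.1.23"))   # True: looks like a campus machine
print(is_authorized("203.0.113.9"))    # False: outside address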

If I were to run a web proxy within the Princeton network, then anybody accessing the web through my proxy might (depending on the circumstances) appear to be using a Princeton IP address. My web proxy might therefore allow anybody on the web to access the proprietary database. Some users might deliberately use my proxy to gain unauthorized access, and some users might be using the proxy for other, legitimate reasons and be surprised to have open access to the database. In either case, the access would be enabled by the database company’s decision to rely on IP addresses to control access.
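Sketched from the client’s side (the proxy hostname is made up), the point is simply that the database server sees the address of whoever opened the connection to it – here, the proxy – not the address of the real user:

```python
# Illustrative only: fetching through an on-campus proxy makes the request
# arrive at the database from the proxy's (allowlisted) address.
import urllib.request

def fetch_via_proxy(url: str, proxy: str = "http://proxy.cs.princeton.example:3128") -> bytes:
    # The outbound connection to the database is opened by the proxy, so the
    # database's IP check sees a campus address no matter where the user is.
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({"http": proxy}))
    return opener.open(url).read()
```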

In practice, people who design web proxies and similar services often find themselves jumping through hoops to try to prevent this kind of problem, even though it’s not their fault. One isn’t supposed to rely on IP addresses for authentication, but many people do. The result is that developers of new services may find themselves either (a) inadvertently enabling unauthorized access to other services, or (b) spending extra time and effort to shore up the insecure systems of others. Some of my colleagues who developed CoDeeN, a cool distributed web proxy system, found themselves wrestling with this problem and ultimately chose to add complexity to their design to protect some IP-address-based authentication systems. (They wrote an interesting paper about all of the “bad traffic” that showed up when they set up CoDeeN.)
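One common kind of precaution – described here only as a sketch, not as what the CoDeeN authors actually implemented – is for the proxy to refuse to “lend” its address to outside clients for destinations known to use IP-based authentication, and to report the real client address in a forwarding header for servers that care to look:

```python
# Illustrative proxy-side precautions; hostnames and netblocks are made up.
from ipaddress import ip_address, ip_network

TRUSTED_NET = ip_network("128.112.0.0/16")                  # clients already inside the campus
IP_AUTHENTICATED_SITES = {"licensed-database.example.com"}  # sites that trust source addresses

def may_forward(client_ip: str, destination_host: str) -> bool:
    # Don't let outsiders borrow our address for IP-authenticated destinations.
    if destination_host in IP_AUTHENTICATED_SITES:
        return ip_address(client_ip) in TRUSTED_NET
    return True

def forwarded_headers(client_ip: str) -> dict[str, str]:
    # Advisory disclosure of the real client; the destination may ignore it.
    return {"X-Forwarded-For": client_ip}
```

None of this makes IP-based authentication sound, of course; it just keeps the proxy from becoming the easiest way to exploit it.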

It will be interesting to see how the CallerID story develops. My guess is that people will stop relying on the accuracy of CallerID, as spoofing becomes more widespread.

What’s the Cybersecurity Czar’s Job?

The sudden resignation of Amit Yoran, the Department of Homeland Security’s “Cybersecurity Czar”, reportedly due to frustration at being bureaucratically marginalized, has led to calls to upgrade the position, from the third- or fourth-level administrator billet that Yoran held, to a place of real authority in the government. If you’re going to call someone a czar, you at least ought to give him some power.

But while we consider whether the position should be upgraded, we should also ask what the cybersecurity czar should be doing in the first place.

One uncontroversial aspect of the job is to oversee the security of the government’s own computer systems. Doing this will require the ability to knock heads, because departments and offices won’t want to change their practices and won’t want to spend their budgets on hiring and retaining top quality system administrators. That’s one good argument for upgrading the czar’s position, perhaps affiliating it with a government-wide Chief Information Officer (CIO) function.

A harder question is what the government or its czar can do about private-sector insecurity. The bully pulpit is fine but it only goes so far. What, if anything, should the government actually do to improve private-sector security?

Braden Cox at Technology Liberation Front argues that almost any government action will do more harm than good:

In an article I wrote last year when Yoran was first appointed, I argued that the federal government has a role to play in cybersecurity, but that it should not be in the business of regulating private sector security. Mandated security audits, stringent liability rules, or minimum standards would not necessarily make software and networks more secure than would a more market-based approach, though it would surely help employ more security consultants and increase the bureaucracy and costs for industry.

Certainly, most of the things the government can do would be harmful. But I don’t see the evidence that the market is solving this problem. Despite the announcements that Microsoft and others are spending more on security, I see little if any actual improvement in security.

There’s also decent evidence of a market failure in cybersecurity. Suppose Alice buys her software from Max, and Max can provide different levels of security for different prices. If Alice’s machine is compromised, she suffers some level of harm, which she will take into account in negotiating with Max. But a break-in to Alice’s machine will turn that machine into a platform for attacking others. Alice has no incentive to address this harm to others, so she will buy less than a socially optimal level of security. This is not just a theoretical possibility – huge networks of compromised machines do exist and do sometimes cause serious trouble.
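A toy calculation makes the externality visible (all figures invented purely for illustration): suppose Alice can buy more or less protection, each level reducing the chance of compromise, and compare the spending that minimizes her own expected cost with the spending that minimizes everyone’s.

```python
# Invented numbers, purely illustrative of the externality argument.
SECURITY_OPTIONS = {0: 0.50, 50: 0.20, 100: 0.05}  # dollars spent -> probability of compromise

ALICE_LOSS = 200    # harm Alice herself suffers if compromised
OTHERS_LOSS = 800   # harm imposed on others when her machine becomes an attack platform

def expected_cost(spend: int, count_others: bool) -> float:
    loss = ALICE_LOSS + (OTHERS_LOSS if count_others else 0)
    return spend + SECURITY_OPTIONS[spend] * loss

private_best = min(SECURITY_OPTIONS, key=lambda s: expected_cost(s, False))
social_best = min(SECURITY_OPTIONS, key=lambda s: expected_cost(s, True))
print(private_best, social_best)   # 50 100 -- Alice rationally under-invests
```

Alice’s rational choice differs from the socially optimal one because the harm she imposes on others never enters her own calculation.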

Of course, the existence of a problem does not automatically imply that government action is required. Is there anything productive the government can do to address this market failure?

I can see two possibilities. The first approach is for the government to use its market power, as a buyer of technology, to try to nudge the market in the right direction. Essentially, the government would pay extra for compromise-resistance, beyond what its own narrow interest would justify, in order to bolster the market for more compromise-resistant software. For example, it might, in deciding what to buy, try to take into account the full social cost of potential break-ins to its computers. Exactly how to make this happen, within a budget-conscious bureaucracy, is a challenge that I can’t hope to address here.

The second approach government might take is to impose some form of liability, on somebody, for the types of security breaches associated with this market failure. Liability could be placed on the user (Alice, in our example above) or on the technology vendor. There has been lots of talk about the possibility of liability rules, but no clear picture has emerged. I haven’t studied the issue enough to have a reliable opinion on whether liability changes are a good idea, but I do know that the idea should not be dismissed out of hand.

What’s clear, I think, is that none of these possibilities require a “czar” position of the sort that Yoran held. Steps to improve cybersecurity inside the government need muscle from a CIO type. Changes to liability rules should be studied, but if they are adopted they won’t require government staff to administer them. We don’t need a czar to oversee the private sector.

A Roadmap for Forgers

In the recent hooha about CBS and the forged National Guard memos, one important issue has somehow been overlooked – the impact of the memo discussion on future forgery. There can be no doubt that all the talk about proportional typefaces, superscripts, and kerning will prove instructive to would-be amateur forgers, who will know not to repeat the mistakes of the CBS memos’ forger. Who knows, some amateur forgers may even figure out that if you want a document to look like it came from a 1970s Selectric typewriter, you should type it on a 1970s Selectric typewriter. The discussion, in other words, provides a kind of roadmap for would-be forgers.

This kind of tradeoff, between open discussion and future security worries, is common with information security issues – and this is an infosecurity issue, since it has to do with the authenticity of records. Any discussion of the pros and cons of a particular security system or artifact will inevitably reveal information useful to some hypothetical bad guy.

Nobody would dream of silencing the CBS memos’ critics because of this; and CBS would have been a laughingstock had it tried to shut down the discussion by asserting future forgery fears. But in more traditional infosecurity applications, one hears such arguments all the time, especially from the companies that, like CBS, face embarrassment if the facts are disclosed.

What’s true with CBS is true elsewhere in the security world. Disclosure teaches the public the truth about the situation at hand (in this case the memos), a benefit that shouldn’t be minimized. Even more important, disclosure deters future sloppiness – you can bet that CBS and others will be much more careful in the future. (You might think that the industry should police itself so that such deterrents aren’t necessary; but experience teaches otherwise.)

My sense is that it’s only because cybersecurity seems remote and mysterious to most people that the anti-disclosure arguments get traction. If people thought about most cybersecurity problems in the same way they think about the CBS memos, the debate over cybersecurity disclosure would be much healthier.