August 28, 2016

Computer Science Professors' Brief in Grokster

Today, seventeen computer science professors (including me) are filing an amicus brief with the Supreme Court in the Grokster case. Here is the summary of our argument, quoted from the brief:

Amici write to call to the Court’s attention several computer science issues raised by Petitioners [i.e., the movie and music companies] and amici who filed concurrent with Petitioners, and to correct certain of their technical assertions. First, the United States’ description of the Internet’s design is wrong. P2P networks are not new developments in network design, but rather the design on which the Internet itself is based. Second, a P2P network design, where the work is done by the end user’s machine, is preferable to a design which forces work (such as filtering) to be done within the network, because a P2P design can be robust and efficient. Third, because of the difficulty in designing distributed networks, advances in P2P network design – including BitTorrent and Respondents’ [i.e., Grokster’s and Streamcast’s] software – are crucial to developing the next generation of P2P networks, such as the NSF-funded IRIS Project. Fourth, Petitioners’ assertion that filtering software will work fails to consider that users cannot be forced to install the filter, filtering software is unproven or that users will find other ways to defeat the filter. Finally, while Petitioners state that infringers’ anonymity makes legal action difficult, the truth is that Petitioners can obtain IP addresses easily and have filed lawsuits against more than 8,400 alleged infringers. Because Petitioners seek a remedy that will hobble advances in technology, while they have other means to obtain relief for infringement, amici ask the Court to affirm the judgment below.

The seventeen computer science professors are Harold Abelson (MIT), Thomas Anderson (U. Washington), Andrew W. Appel (Princeton), Steven M. Bellovin (Columbia), Dan Boneh (Stanford), David Clark (MIT), David J. Farber (CMU), Joan Feigenbaum (Yale), Edward W. Felten (Princeton), Robert Harper (CMU), M. Frans Kaashoek (MIT), Brian Kernighan (Princeton), Jennifer Rexford (Princeton), John C. Reynolds (CMU), Aviel D. Rubin (Johns Hopkins), Eugene H. Spafford (Purdue), and David S. Touretzky (CMU).

Thanks to our counsel, Jim Tyre and Vicky Hall, for their work in turning a set of ideas and chunks of rough text into a coherent brief.

Comments

  1. Cypherpunk says:

    As an academic, don’t you feel an obligation to use your specialized knowledge and skills for the benefit of society? To pursue the truth, the full truth, no matter where it leads and who it offends? To present a full analysis of an issue, with all its complexities, looking at all sides critically? Isn’t that what you teach your students?

    If so, then why do you only oppose one side’s arguments in this case? Can you literally find not one thing negative to say about the other side? Is every one of their arguments above criticism?

    Maybe I am being unreasonable and unrealistic in imagining a standard within the academy of free inquiry and unbiased pursuit of the truth. It could be that academics take sides based on their own perceived self-interest, and intentionally promote that side of an issue which serves their own needs best. Or perhaps, more generously, they decide on behalf of society which policies are superior, and then become tireless advocates for those issues, bending the truth and manipulating arguments as necessary in order to bring about what they perceive as the social good.

    I’d be curious to know how you perceive the relation between academia and society on political issues like the one before us.

  2. Roland Schulz says:

    Why should anyone besides the music industry, and those that generate revenue from ‘donations’ by said industry, be on the side that tries to defeat a new technology just to cling to an outlived business model? The one side most obviously bending the truth and manipulating arguments here is the content industry, and it _is_ a service on behalf of society to correct those misrepresentations.

    The music industry may or may not have a point in taking Grokster to court. But doing so with blatantly false accusations, and trying to seed misconceptions into the minds of the judges, and of the legislators who will surely one day come back to this case seeking guidance in the making of future laws, is something that academia has a duty to discern and bring to public view. Would that this were also the case with US foreign policy :P

  3. Cypherpunk forgets that this is an amicus brief in a court case, not a lecture at a university. In a court case, I think it would be counterproductive to present arguments against the side you are trying to support; let the opposition do that. Sure, “within the academy” all sides should be presented and examined fully; this isn’t “within the academy”.

  4. Anony Mouse Cow Herd says:

    As an academic, don’t you feel an obligation to use your specialized knowledge and skills for the benefit of society? To pursue the truth, the full truth, no matter where it leads and who it offends?

    And that is exactly what this document is about. Basically, they wish to inform the courts that peer-to-peer (P2P) is NOT something new, but is the BASIC building block of the Internet. (http://www.ietf.org/rfc/rfc0001.txt) Any legislation on this could have disastrous effects on current and future Internet technology, far beyond pirated movies and songs. This brief is trying to point out that banning/filtering/legislating P2P data on the Internet would have effects as disastrous as banning/filtering/legislating analog (aka audio) data on the telephone network. Are there laws to prosecute IP theft? Yes. Are there laws specifically for IP theft via telephones? No. Do we need one specifically for IP theft via the Internet?

    Are the professors saying to abolish IP law? No. Is IP theft a problem? Opinion withheld. Will legislating P2P networks do more damage than good? More than likely.

    Putting a policeman at every street intersection doing drug tests, vehicle searches, and driver’s license checks would probably reduce the number of crimes in which an automobile is involved, but would the outcome be more beneficial than the resources expended, congestion incurred, and all-around crippling of the transport service itself?

  5. Cypherpunk,

    We were operating under three tight constraints:

    (1) Anything in the brief had to be agreeable to all seventeen of us.

    (2) Our word count was limited (and we used it all) so we couldn’t say more. There wasn’t space for the sort of on-the-one-hand-this, on-the-other-hand-that, therefore-we-conclude writing mode that one sees in less length-constrained academic documents. We had to cut to the chase.

    (3) Most importantly, we could only respond to the briefs that were filed before our deadline (and no later deadline was available to us). For example, responding to the contents of Grokster’s brief would have been causally impossible.

  6. Well said!

  7. Craig Pennington says:

    cypherpunk writes:

    As an academic, don’t you feel an obligation to use your specialized knowledge and skills for the benefit of society? To pursue the truth, the full truth, no matter where it leads and who it offends?

    I would imagine that being an academic, one would feel a duty to oppose unjustified limits on research. By my reading, that is the position that this brief takes.

    To present a full analysis of an issue, with all its complexities, looking at all sides critically?

    An amicus brief is a structured document, limited in size, which advocates a specific cause to the court. It is not a paper for an academic journal.

    Cheers,
    Craig

  8. For me, it fits a scientist very well to take a side – this is what drives progress. As much as science is about methods, it should not prevent people from reaching a conclusion. I welcome this letter: the US is too important, even to non-Americans like me, to become a prisoner of the entertainment industry.

  9. Second, a P2P network design, where the work is done by the end user’s machine, is preferable to a design which forces work (such as filtering) to be done within the network, because a P2P design can be robust and efficient.

    I understand why this is the consensus view (I might say, “dogma”) of the computer networking community. But what’s striking about the list of signatories is that it includes roughly as many prominent computer security researchers as networking researchers. I say that this is striking because it should be obvious by now that the P2P design of the Internet, however attractive its scalability and extensibility properties, has resulted in a security and reliability nightmare of the first order.

    These days, both the security and networking communities are devoting a very large fraction of their energies to the task of applying “duct tape and baling wire” to the Internet, in a vain attempt to secure what amounts to an inherently unsecurable P2P-style network. Putting aside opinions about intellectual property and research freedom, surely it would have been more accurate for the signatories to say something like, “P2P networks, while inherently doomed to be less secure and reliable than networks with more structured architectures, are nevertheless an attractive choice in many settings–including the vast majority of current Internet applications–where scalability, flexibility and extensibility are more important than security or reliability.”

    Granted, this would have been a less politically/legally effective phrasing, but shouldn’t computer scientists–and especially computer security researchers, for whom public understanding of computer security ought to be an important goal in itself–value their scientific goals over their legal and political ones, at least when speaking professionally?

  10. Dan,

    Please bear in mind that the sentence you quote comes from a section of the brief that plays roughly the same role that the abstract plays in a technical paper. It tries to boil down an entire section of the brief into a single sentence, and so necessarily leaves out a lot. I’m curious what you think of the corresponding section of the brief (Section II, starting on page 6), which makes the argument at much greater length.

  11. Ed–thanks for the pointer. Yes, most of section II of the full brief is reasonably circumspect in recommending the end-to-end principle, simply suggesting (quite correctly) that following it may yield certain technical benefits in some circumstances. But a sentence like, “[o]n the contrary, certain functionality (such as using filters) should not be done at the network level” might be construed as a tad overly hostile to, say, the firewall community. And the flat assertion that “[t]he best way to provide appropriate error correction is to use an end-to-end mechanism” is, I think, a serious overgeneralization, for reasons that I imagine you can come up with yourself.

    Overall, the brief seems to oscillate between “P2P shouldn’t be outlawed, because it might be useful for some things” and “P2P will save the world, if only it’s not strangled at birth”. I guess I can give the security researchers the benefit of the doubt, and assume that they were only responsible for the former bits, and that the latter parts were entirely the responsibility of those P2P-crazy systems and networking folks….

  12. End-to-end is less secure how, exactly? If the smarts are in a large number of small targets, each potentially with its own idiosyncratic configuration, it is very difficult to do serious damage, save through compromising a small fraction of machines and using them to flood the network. There’s some potential for denial of service, but low risk to a typical end-user machine of a serious privacy breach or local data-loss attack. (Well, ideally. The reality with large numbers of identically-configured, uniformly susceptible Windows boxen is a plague of spyware and viruses, but that cannot be a sustainable situation.)

    Also, each end user gets to make their own decisions about how important their machine’s security is to them. The user is ultimately in control; the firewall is right by their feet where they can turn it off or adjust it if it’s blocking desired traffic and blackhole the IP of that annoying person on MSN Messenger without imposing that on everyone else in some large area.

    Centralizing would be worse — a smaller number of higher-profile targets whose takedown would knock out most network functionality completely; much bigger risks of mass privacy invasions or data loss (just look at the ChoicePoint scandal; now imagine that it was virtually mandated that this information be gathered in one place for even more people, where it could all be compromised in one stroke) and general “too many eggs in not enough baskets” risk concentration; people would not have control of their own network participation to anything like the extent they do now.

    Supposing they still could locally filter traffic they personally don’t desire, it would remain true that central authorities might filter traffic they did desire, and they wouldn’t have a way around that. Adults of sound mind and judgment should not be subjected to central control of their access to information or resources. I don’t want anyone else deciding what messages or traffic I should get and what I shouldn’t; I’ll make up my own mind, even if the price of that is spam.

    The alternative is much worse: tyranny. First it will just be viruses and spam. Then it will be infringing traffic, which will just encrypt and go underground anyway. Then it will be borderline stuff like the Grey Album, and hate speech; then pro-marijuana-legalization stuff, porn, and other stuff that’s distasteful to some but obviously desired by others — busybodies will do everything they can, once a central point of control exists, to impose their prudish agenda on everyone else. And then politically undesired viewpoints in general. And before long the people at the filtering points are the de facto rulers of a de facto dictatorship.

    The slippery slope that starts with centralizing IT, and the consequent potential for mass IT censorship on a low budget, ends with the clomping sound of a few million marching jackboots. And that is one hell of a big security risk to take. We must not follow China’s lead and establish central routers and official censorship on the ‘net to satisfy the baying of the militant faction of one ideology.

  13. By the way, there’s an article on the first page that is missing its “post a comment” stuff, though it has a new comment I would like to respond to. Articles that are on the first page are supposed to accept new comments, and always have done in the past here. A malfunction? Please look into it.

  14. Neo–First of all, it’s important to distinguish between host security and network security. Hosts are secure if they perform their designated functions securely, while networks are secure if they perform their designated functions securely. Insecure hosts are vulnerable on just about any usable network, and an insecure network is vulnerable to attack even if all the hosts on it are completely secure.

    The Internet, for example, does nothing except transport packets–in keeping with the end-to-end principle. However, because of its insecure P2P nature, its performance of even this one simple function is both highly unreliable and highly insecure, compared with more managed networks such as the telephone network. (The Internet has other advantages, of course–I’m simply pointing out its security and reliability disadvantages.) Firewalls are far from the panacea you make them out to be–they are in fact next to useless against well-executed DDoS attacks (or even individual badly misconfigured routers).

    Note that this inherent relative lack of security and reliability is completely independent of the security of the hosts on the network. If all hosts on the Internet were to become perfectly secure against attack tomorrow, DDoS attacks would still be easy. For example, attackers might release “DDoS@Home”, a safely sandboxed application which, in return for a user benefit–perhaps the showing of pleasing images of some sort–operated as part of the attacker’s DDoS attack “botnet”. Users would be happily and safely entertained, and their spare bandwidth and cycles would in return be at the service of the DDoS attacker.

    In networks with built-in accountability mechanisms–such as the telephone network–such attacks are much easier to prevent, or at least to punish afterwards. The Internet’s “dumb”, P2P architecture, however, generally precludes such mechanisms.

  15. Craig Pennington says:

    Firewalls are far from the panacaea you make them out to be–they are in fact next to useless against well-executed DDoS attacks (or even individual badly misconfigured routers).

    Note that this inherent relative lack of security and reliability is completely independent of the security of the hosts on the network.

    I don’t disagree with you in principle, but you are overstating the problem with IP network security. Executing a DDoS requires a large number of cooperating distributed hosts on the network. In practice, this occurs almost exclusively because of the existence of insecure hosts on the network. I would say that the practical lack of security and reliability of the network (which actually seems sufficiently reliable for anything short of 911 service to me — and both my office mates have dumped their land lines for internet telephony) is almost entirely due to the existence of insecure hosts on the network. IP network security could be improved, but it’s not that bad now.

    Cheers,
    Craig

  16. Craig–As I mentioned before, it’s obvious that the Internet is secure and reliable enough for a huge range of applications. (Whether telephony is one of them remains to be seen, I’d argue–I probably know as many disillusioned VoIP skeptics as converts.) My point is simply that the reason is not that P2P networks like the Internet are pretty good at providing security and reliability, but rather that a great many applications simply don’t require much security or reliability at all in practice, and are therefore well-placed to take advantage of the Internet’s P2P-related advantages (scalability, flexibility, and of course low cost), while surviving its significant security and reliability disadvantages compared to other network architectures.

    It’s also true that DDoS attacks today generally use compromised hosts. My point was that they needn’t necessarily do so. Obviously a world of bullet-proof hosts is a long way off, so this may seem like an academic point, but in such a world, I don’t believe that DDoS attacks would be significantly more difficult than they are today, for the reasons I explained. DDoS vulnerability is fundamentally a problem with the network, not with the state of host security.

  17. “In networks with built-in accountability mechanisms–such as the telephone network–such attacks are much easier to prevent, or at least to punish afterwards. The Internet’s “dumb”, P2P architecture, however, generally precludes such mechanisms.”

    You don’t quite get it, do you? We don’t WANT a network with “built-in accountability mechanisms”. We don’t want “big brother inside”. We want a free network and secure hosts, not a secure network and users in chains, beholden to special interests and whoever has a lot of power/money.

    Given secure hosts the only remaining danger is a DDoS with voluntary participants, and since denial of service attacks are already illegal we don’t need any additional laws, any infrastructure changes, or any goddam broadcast flags *spit* to deal with them. A sustained DDoS can be tracked back to the originating machines, and the machines’ owners can be told to secure their boxes and make sure the flooding stops. Any that fail to comply within a certain time can be dropped from the network and fined.

    All we need for an ideal internet is better host diversity and security, and traffic cops. The LAST thing we need is wanna-be thought police having the means to actually police the internet for traffic they decide is undesirable on everyone else’s behalf. Let individual host owners decide what traffic (e.g. spam, DDoS packets) is undesirable and what traffic (possibly including things busybodies wouldn’t want them to have such as porn or far-left anarchist fiction) is desirable.

  18. Craig Pennington says:

    I don’t believe that DDoS attacks would be significantly more difficult than they are today, for the reasons I explained.

    I do think that they would be significantly more difficult, but not so much so that they would be impractical. But I’m not going to belabour that point. It is interesting to note a recent project that could bring this shortcoming to the PSTN, though — P2P internet telephony to PSTN bridges. It was called Bellster, now called Fwdout. In a world where there were a sufficient number of subscribers to this network, someone could wreak all sorts of PSTN havoc as anonymously as your average DDoS script kiddie does on the internet today.

    Cheers,
    Craig

  19. Anonymous says:

    “Let individual host owners decide what traffic (e.g. spam, DDoS packets) is undesirable and what traffic (possibly including things busybodies wouldn’t want them to have such as porn or far-left anarchist fiction) is desirable.”

    The problem with this statement is that it doesn’t really hold up. I’ve already decided that I don’t want DDoS packets or spam traffic on my network. I must be missing a button to click, or a form to fill out, because I’m still getting both.

    I agree with you, in that we don’t want a big-brother type of network, but the current choice is severely lacking. Due to its design, the very functioning of the Internet at large relies on the good will of all participants. There are select few mechanisms to find and hold accountable people who abuse the network. Sadly, I have no solution, but if you are proposing we keep the Internet the way it is, I feel it will continue to erode even more.

  20. Felix Deutsch says:

    As an academic, don’t you feel an obligation to use your specialized knowledge and skills for the benefit of society? To pursue the truth, the full truth, no matter where it leads and who it offends? To present a full analysis of an issue, with all its complexities, looking at all sides critically?

    I’d agree, but since this sounds like the typical boilerplate of the “give the flat-earth society an equal hearing”, “Evolution is JUST a theory”, academics-have-a-liberal-bias cranks, I’ll just repeat what others have put more eloquently:
    You don’t have to give equal time to the opposite when you’re making a statement of fact, and even less so when providing an amicus brief for one side of a case.

  21. Academics try to look at both sides. Lawyers let the other team’s lawyers look at the other side. That’s how the justice system works, and it’s entirely appropriate to act that way. There is no point in giving ammunition to the other side, when the other side will absolutely *not* present their arguments with the same high-minded fairness.

    Privately, I’m sure all of these researchers have done their best to evaluate all sides of this issue. Perhaps some of them have written papers about it. Now, they’ve reached their conclusions, and it’s best to present them as strongly as possible. Do anything else, and opposing lawyers will be snickering into their hands.

  22. Anonymous says:

    Dan Simon,

    I believe your understanding of how network security works is flawed. A secure application, such as a web browser accessing an SSL-secured site, works by exchanging encrypted information. Encryption algorithms *assume* that the underlying transport is completely insecure, and work around that – that is the core problem of encryption. You could post the messages that your web browser is sending in the local newspaper and have the server it’s talking to post its responses there, and nobody would be able to decode them. Thus, security is in no way compromised by having a decentralized transport network.

    The other security-related issues you mention are due to bugs or bad design in individual applications, not the Internet as a whole. If a database application constantly reads and executes commands from a specific port, without authentication, and an attacker sends his own command there, of course it will execute it. But it doesn’t matter what the transport network in between was – the same problem would happen elsewhere. Similarly, viruses and “botnets” are a problem with uneducated users and buggy email clients, not the underlying transport network that is the Internet.

    Finally, the DDoS problem you mention is one that would be inherently difficult to solve on any network. DDoS is the equivalent of convincing 10,000 people to go into the local supermarket at once. It will be swamped no matter what you do. There are actually many ways to detect and curb DDoS attacks, though (some of which are based on P2P systems! see http://www.intel.com/research/network/berkeley_collab.htm#public). Just as a simple example, how many DDoS attacks have affected you in the past month? Do you think people were simply not trying, if the Internet is as insecure and unreliable as you make it out?
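    The point that encryption assumes an untrusted transport can be shown with a toy sketch. This is not real cryptography (real systems use vetted protocols such as TLS with AES-GCM); the keystream construction and the `insecure_channel` function are purely illustrative:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only --
    # real systems use vetted ciphers (e.g. AES-GCM inside TLS).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; decryption is the same operation.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

def insecure_channel(data: bytes) -> bytes:
    # The transport: anyone, including an attacker, sees what passes.
    print("eavesdropper sees:", data.hex())
    return data

key = b"shared secret established end-to-end"
received = insecure_channel(encrypt(key, b"meet at noon"))
assert decrypt(key, received) == b"meet at noon"  # only the endpoints recover this
```

    The channel here does nothing but carry (and expose) bytes, yet the endpoints still communicate privately; no cooperation from the network is required.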

    The truth of the matter is that the Internet has scaled beyond our wildest hopes and served brand-new applications without problems. Who would have thought when the Internet was created that with little change it would support efficiently transferring files at rates of 1000 Kb/s (this is faster than many of the earliest disk drives or floppy drives), allow one to look up news or do research in an encyclopedia with response times less than 1 second, and allow real-time voice or video conversations at higher quality than telephones, all for about the same cost as telephone or cable service? Who would have thought that there would be thousands of websites like this one where people go every day to comment on issues? Yet this simple “unreliable” distributed system supports all this without any major problems.

  23. Security is best done end to end, as are transport protocols (e.g. TCP, UDP), for the same reason – “if you want something done properly, you need to do it yourself”.

    If a host wants to be assured of its security, then it needs to execute those security functions itself, rather than relying on any intermediary devices. The network can’t provide as good security as hosts need, so the network shouldn’t try.

    For example, we all know that a firewall at the Internet edge provides “security” for the host. Except that it only provides security from Internet-based attacks. Insert a wireless AP between the firewall and the host, and the host is now exposed to a different set of attackers – wardrivers. Furthermore, it has no protection from those attackers at all, because it put all its faith in the Internet firewall. It trusted something else to do a job it was most qualified to perform, and had the most interest in performing.

    I first learned of this idea from the Steve Bellovin paper, “Distributed Firewalls”. Steve is one of the Professors who contributed. The more I’ve thought about it, the more it makes sense.
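    The distributed-firewall idea can be sketched in a few lines: each host evaluates a (centrally distributed) policy locally, so protection does not depend on which path traffic took to arrive. The policy table and its format here are hypothetical, not taken from Bellovin’s paper:

```python
# Each host applies a (centrally distributed) policy itself, instead of
# trusting one perimeter box. The policy table and format are hypothetical.
POLICY = {
    ("tcp", 22): "allow",  # ssh
    ("tcp", 23): "deny",   # telnet
}

def host_decision(proto: str, port: int) -> str:
    # Enforced on the host, so protection holds even if traffic arrives
    # by a path that bypasses the perimeter (e.g. a rogue wireless AP).
    return POLICY.get((proto, port), "deny")  # default-deny

assert host_decision("tcp", 22) == "allow"
assert host_decision("tcp", 23) == "deny"
assert host_decision("udp", 53) == "deny"
```

    Because the check runs on the endpoint itself, the wireless-AP scenario above no longer defeats it.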

  24. The above two comments, I think, nicely illustrate the damage that dogmatic acceptance of the “end-to-end principle” in the computer networking community has done to its thinking about network security.

    A very solid argument can be made that data privacy is best handled by end-to-end encryption. (Of course, an even more solid argument can be made that data privacy is best ignored in most circumstances. In practice, the vast majority of traffic on public networks is transmitted in the clear, with no adverse consequences.)

    On the other hand, it’s not obvious at all that authentication is best handled end-to-end. The phone network, for example, implicitly implements a kind of two-way authentication, in the sense that both parties trust the network to identify, and connect to, the other party’s correct telephone number. Innumerable telephone users find this service invaluable, and rely on it all the time.

    Now, this service does not exist on the Internet–senders can identify themselves (or misidentify themselves) as they please, and the network simply makes its best effort to deliver packets to the requested destination, with no guarantee that it will succeed, or provide correct sender information. This is a huge network security headache, one which greatly facilitates DDoS attacks.

    IPSec attempted to solve this problem in a manner compatible with the “end-to-end” principle. Deploying it turned out to be an even bigger headache than the original one, and it is now used pretty much exclusively within private networks and VPNs. The rather obvious alternative of building some kind of authentication into the network, as in the case of the phone network–leveraging the network’s readily available information about the endpoints it’s connecting–is, as far as I know, not even under discussion, given its obvious violation of the “end-to-end” principle.

    (In fact, I’m given to understand that the telephone network’s implementation of caller ID is flawed, allowing caller ID spoofing. If so, then that, too, as in the case of the Internet, will be a huge network security headache–if it’s not already. But the fact that so many people have come to rely on the phone network’s flawed implementation suggests that the implementation should be fixed–not that it should be dismantled in favor of end-to-end solutions.)

    As for firewalls, yes, Mark, they’re not the perfect solution. (Fixing the Internet itself would certainly be preferable.) But I doubt there are many network security administrators out there that have any intention of getting rid of their firewalls any time soon. Given that the Internet itself lacks even the most basic means for controlling incoming traffic, firewalls provide an essential central point of policy-based traffic control for a private network. If that network contains hundreds or thousands of hosts, then trying to manage their security remotely is a nightmare (viz., the “inserted AP” problem you mentioned–not to mention the difficulty of remotely updating the network security policy for a host that’s being DDoS-attacked). A firewall is simply a much, much easier way to enforce a uniform policy over the network, even though–indeed, precisely because–it violates the “end-to-end” principle.

  25. Anonymous says:

    “On the other hand, it’s not obvious at all that authentication is best handled end-to-end. The phone network, for example, implicitly implements a kind of two-way authentication, in the sense that both parties trust the network to identify, and connect to, the other party’s correct telephone number. Innumerable telephone users find this service invaluable, and rely on it all the time.”

    This can be done on the Internet already. Example: if I sign in to MSN Messenger (an instant messaging program), and so does a friend, then I know it’s him and he knows it’s me, because there’s a Microsoft server sitting in between that plays the same role as the telephone switchboard. In other words, the Internet is only used for transport – the application you build on top of it does whatever else you need. Instead of phone lines, TCP connections carry the streams of data. In your example, suppose someone cut the phone line that goes into your house and plugged it into their phone; how would the switchboard know it’s not you? How is this secure? (Of course, web applications can use cryptography for authentication.)
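    A minimal sketch of application-level, end-to-end authentication riding on an untrusted transport, using Python’s standard `hmac` module. The pre-shared key and messages are hypothetical; real deployments establish keys via a key exchange or a trusted intermediary rather than a hard-coded constant:

```python
import hashlib
import hmac

# Hypothetical pre-shared key; real deployments establish keys via a
# key exchange or a trusted server, not a hard-coded constant.
KEY = b"pre-shared key"

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(tag, sign(msg))

tag = sign(b"it's really me")
assert verify(b"it's really me", tag)      # authentic message accepted
assert not verify(b"forged message", tag)  # altered message rejected
```

    Nothing in this exchange relies on the network identifying anyone; the endpoints authenticate each other directly.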

  26. If I thought that someone might have cut the phone line to my home and plugged it into their phone–and the conversation were sensitive enough that I were actually concerned about it–I would no doubt use end-to-end cryptography to secure my conversation. Such encrypting telephones are, in fact, commercially available. How many times have you used one? Personally, I can count the number on no hands.

    Yes, network-provided authentication would involve a tradeoff between the cost of implementing it and the quality provided. If I needed more secure authentication, and were willing to pay the cost, I’d use my own end-to-end solution. If I didn’t need even the level of authentication provided by the network, and couldn’t afford the extra cost imposed in return for that service, I’d seek out a cheaper network–such as the current Internet–that doesn’t offer it.

    But the “end-to-end” principle, translated into economic terms, is in fact a bald assertion that the “sweet spot” for any particular such cost-quality tradeoff is always the lowest cost, lowest-quality option, with the user bearing the burden of implementing (and paying for) any extra features. In many cases, that’s clearly correct–as the success of the Internet has proven. But it’s hardly an iron-clad law.

    An obvious example: a single very simple, inexpensive violation of the end-to-end principle–universal deployment of ingress/egress filtering by ISPs–would allow the Internet to intrinsically provide something like telephone network-level authentication between IP addresses. Such a measure wouldn’t solve the DDoS problem completely, but it would surely help–in addition to providing many other benefits. Are you really so enamored of “end-to-end” dogma that you’d rather this highly attractive bit of functionality–the validation of IP addresses–be provided strictly by end hosts, using cryptography?

    And if not, why is it an exception–and what other exceptions might you be willing to consider?
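For concreteness, the ingress/egress filtering proposed above amounts to a simple membership check at the ISP’s edge: forward an outbound packet only if its claimed source address belongs to a prefix the ISP actually allocated. A minimal sketch – the customer prefix (a documentation range) and the function name are made up for illustration:

```python
import ipaddress

# Hypothetical ISP that has allocated 203.0.113.0/24 to its customers.
CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")

def should_forward(source_ip: str) -> bool:
    """Egress filter: pass an outbound packet only if its claimed
    source address actually lies inside this ISP's customer prefix."""
    return ipaddress.ip_address(source_ip) in CUSTOMER_PREFIX

# A packet claiming a source outside the prefix is spoofed and dropped.
assert should_forward("203.0.113.42")       # legitimate customer address
assert not should_forward("198.51.100.7")   # spoofed source
```

Deployed universally at network edges, a check like this would let a packet’s source address be trusted to at least identify the originating network – roughly the telephone-style guarantee discussed above.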

  27. avatar Ben Newman says:

    Dan Simon wrote:

    An obvious example: a single very simple, inexpensive violation of the end-to-end principle–universal deployment of ingress/egress filtering by ISPs–would allow the Internet to intrinsically provide something like telephone network-level authentication between IP addresses… Are you really so enamored of “end-to-end” dogma that you’d rather this highly attractive bit of functionality–the validation of IP addresses–be provided strictly by end hosts, using cryptography?

    Short answer: Yes.

    Long answer: ingress/egress filtering by ISPs

    We don’t, in a pinch, trust our ISPs any more than we trust anyone else. The end-to-end principle means that the technical correctness of the network doesn’t depend on the trustworthiness of anything — not any one technology, individual server or router, and not any one company or institution. A centrally controlled network, or even a centrally maintained authentication infrastructure, requires a high level of trust in whoever controls it.

    Somewhat off-topic, but an illustrative example of this is internet identity services. There are a number of companies offering accounts that you can use to identify and authenticate yourself to other people. That’s nice, but why should I trust/pay a company to maintain the association between my identity/username/etc. and the secret I use to secure it? What if they get bought out? Hacked? The only true security is a public/private keypair where I know the private key and nobody else does — this provides security that is superior in principle to any system which requires universal trust in some central authority.
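The asymmetry the previous paragraph relies on – only I can produce a proof, anyone can check it – is easy to sketch with textbook RSA signing. This is a toy with tiny fixed primes, purely to illustrate the idea; a real system would use a vetted cryptographic library and far larger keys:

```python
import hashlib

# Toy RSA keypair with tiny fixed primes -- for illustration only.
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: kept secret

def sign(message: bytes) -> int:
    """Only the holder of the private exponent d can compute this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone who knows the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# Challenge-response: a peer sends a fresh challenge, I sign it, and the
# peer verifies -- no central authority ever enters the picture.
challenge = b"prove you hold the key"
assert verify(challenge, sign(challenge))
```

The point of the sketch is the trust model, not the arithmetic: verification needs nothing but the public key, so no third party has to be trusted (or paid) to vouch for the association.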

  28. Dan, it seems to me you’re dogmatically sticking to the current, conventional methods of security, rather than considering their limitations.

    I first implemented a firewall back in 1996. I thought they were perfect at the time.

    The first time I implemented NAT, at around the same time period, I got burned by it not supporting NetBIOS, which is what the customer wanted to push through the NAT box. That was directly because NAT breaks end-to-end.

    I’m not dogmatic about the end-to-end principle. I have practical bad experiences from when I didn’t follow it, or rather, didn’t know about it.

    A rule I learned a while ago was to try to be both efficient and effective. Most people confuse the two words, thinking they mean the same thing. They don’t. Efficient means “doing things right”; effective means “doing the right things”.

    Current firewall methods aren’t that effective. I gave a wireless AP example. What makes that example worse is that wireless APs are getting so cheap and widely available that an end user within the organisation could bring one in and discreetly install it on the network, just to be able to read email in the cafe below the building. Central IT wouldn’t know about it at all. Central IT would have a false sense of security, because they could point to the Internet firewall and say “we’re secure because we have an (Internet) firewall”. A false sense of security is worse than no security at all; with no security, at least you know you don’t have any.

    I probably should have been a bit more explicit about describing the alternative model. It isn’t to get rid of firewalls – it is to put them on the end host itself, and remove them from the network. Simply, only the host can do a proper job of protecting itself from network traffic from any direction, so that is where the protection should be implemented.

    Note that if IT had deployed firewall security on the hosts themselves, then they could be confident that no matter how the security of the network is breached – for example, by the insertion of a wireless AP – their hosts are protected.
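The host-resident filter described above boils down to a first-match rule list evaluated on the host itself. A minimal sketch – the rules and ports are made up for illustration; the point is only that the decision is made locally, so it applies to packets arriving from any direction, including a rogue AP on the local segment:

```python
# Hypothetical per-host policy: (protocol, port, action) rules,
# evaluated first-match, with a default-deny fallback.
POLICY = [
    ("tcp", 22, "allow"),    # ssh, from anywhere
    ("tcp", 445, "deny"),    # block SMB even from the "inside"
]
DEFAULT = "deny"

def decide(protocol: str, port: int) -> str:
    """Return the action for an incoming packet, whatever its origin."""
    for proto, p, action in POLICY:
        if proto == protocol and p == port:
            return action
    return DEFAULT

assert decide("tcp", 22) == "allow"
assert decide("tcp", 445) == "deny"
```

Because the rules live on the host, there is no “inside” that bypasses them – which is exactly the property a perimeter firewall loses once the perimeter is breached.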

    Why do Windows XP, OS X and Linux distributions come with firewalls out of the box? Because the OS vendors can’t assume that the user has a firewall “upstream” in the network, or that the upstream firewall will provide the sort of protection the end user needs or assumes. Once you can’t assume something exists, there is no point depending on it.

    People might argue that firewall policy distribution is not scalable, and that a centralised firewall is simpler. Technically that is true – it is the classic one-versus-many argument. That doesn’t mean it isn’t achievable, though. The problem is no different and no harder than distributing file system or application policy to desktops. Microsoft SMS performs that function, and there is no reason why it couldn’t be enhanced to also deploy firewall policy to end hosts. Another of Steve Bellovin’s papers describes a system they built that distributes firewall policy using IPsec. The solutions exist; they just need to be further developed and deployed.

    I haven’t fully been telling the truth. Central IT couldn’t be confident that cleartext traffic wasn’t being sniffed, even if they did implement firewall security on the hosts. So they should also implement end-to-end encryption between the hosts. That technology also exists – it is called opportunistic IPsec.

  29. Dan, you’re right: caller ID can easily be spoofed by anyone who has programming access to an organisation’s PABX. In Kevin Mitnick’s book, “The Art of Deception”, he describes how it works. That doesn’t sound like a lot of people, until you consider how many organisations have PABXs.

    According to Kevin, the telephone network blindly trusts an organisation’s PABX for Caller ID information. It doesn’t verify it in any way.

  30. I probably should have been a bit more explicit about describing the alternative model. It isn’t to get rid of firewalls – it is to put them on the end host itself, and remove them from the network.

    I believe a sensible network administrator would deploy both a centralized firewall and end-host firewalls. The policy for the former can then focus on keeping out the kind of threats that originate from outside the local network–primarily DDoS–while the latter would be tailored to deal with “inside” threats–including hackers intruding via surreptitiously installed APs.

    I’ve already alluded to a particular nightmare scenario that end-host firewalls, alone, deal with very poorly: an important internal server being DDoS-attacked via the Internet, and unreachable even to the policy distribution server, let alone to urgent clients. This scenario is made possible courtesy of the Internet’s complete lack of sender authentication or receiver-initiated throttling capability–a perfect example of the “end-to-end” principle in action. A centralized firewall takes care of the problem quite nicely–at the cost of violating the end-to-end principle.

    To paraphrase my earlier question: are you really so enamored of the end-to-end principle that you’re willing to expose yourself to this nightmare scenario, just to avoid violating it?

  31. “There are select few mechanisms to find and hold accountable people who abuse the network.”

    “Senders can identify themselves (or misidentify themselves) as they please, and the network simply makes its best effort to deliver packets to the requested destination, with no guarantee that it will succeed, or provide correct sender information. This is a huge network security headache, one which greatly facilitates DDoS attacks.”

    Any system for globally providing “accountability” endangers freedom on the Internet, by giving some central authority the power to identify and sanction individual users on any basis it pleases. The “accountability mechanism” will turn into another device of state oppression in places like China and North Korea, and in democracies it will turn into a nightmarish hodgepodge as every special interest with an axe to grind, from far-right religious groups to the RIAA, demands that some agenda or another be enforced through it. The result will be both an erosion of freedoms we’ve long taken for granted, and massive slowdowns and technical headaches as some central system tries to be everything to everybody at once. It wouldn’t scale, anyway.

    I find it curious that no one but me has raised or addressed the freedom issue in these comments so far, although it is critically important given the recent multiplication of organizations that want Internet regulation “for our own good”. Microsoft’s “IRM” DRM system, RIAA and MPAA pogroms against filesharers, “broadcast flag” advocates, law enforcement agencies that want wiretapping capabilities, “trusted computing” … if they all have their way, there’ll be cops at every street corner (router) and in every end user’s box (a “trusted computing” chip) dictating how they may and may not use their own hardware and data. Surely all of us here agree that we do not want this, and that anything that enables massive, intrusive network regulation will inevitably be perverted into actual intrusive network regulation, and every one of us will lose more than we might stand to gain?

    If the threat of DDoS is the price to pay for freedom then it’s not a very high price IMO. Anyway, DDoS vulnerability itself results from violating the end-to-end principle and making something or another dependent on a central server, which becomes a DDoS target. We need more decentralization, not less.

  32. If the threat of DDoS is the price to pay for freedom then it’s not a very high price IMO. Anyway, DDoS vulnerability itself results from violating the end-to-end principle and making something or another dependent on a central server, which becomes a DDoS target. We need more decentralization, not less.

    I’m not entirely without sympathy for your point of view, and perhaps there’s a legitimate role for your idealized purely peer-to-peer network (even more so than today’s Internet). My complaints are that (1) a more accountable, secure, reliable Internet would be extremely valuable for many applications, and (2) “end-to-end” fanatics notwithstanding, it’s simply not possible to build such a network on top of the current Internet without massively violating the “end-to-end” principle.

    I don’t know how to solve this problem–I’m a security guy, not a networking person. All I’m looking for, really, is recognition that the problem is real, and not some kind of religious heresy. If I concede the potential value of your preferred brand of Internet, will you at least consider the possibility of some merit in mine?

  33. “a more accountable, secure, reliable Internet would be extremely valuable for many applications”

    Yes, indeed — the RIAA, right-wing governments, anti-pr0n crusaders, and lots of other snoops and would-be tech and communications regulators would *love* such an Internet. But in a democracy, the needs of the majority must overrule the whining and power-grabbing behavior of special interests, and a free Internet is thus preferable to a “secure” (by which you seem to mean accountable, and therefore regulatable) Internet.

  34. Final words on the topic — but alas, not my own:

    “They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” – Benjamin Franklin