November 21, 2024

New Internet? No Thanks.

Yesterday’s New York Times ran a piece, “Do We Need a New Internet?” suggesting that the Internet has too many security problems and should therefore be rebuilt.

The piece has been widely criticized in the technical blogosphere, so there’s no need for me to pile on. Anyway, I have already written about the redesign-the-Net meme. (See Internet So Crowded, Nobody Goes There Anymore.)

But I do want to discuss two widespread misconceptions that found their way into the Times piece.

First is the notion that today’s security problems are caused by weaknesses in the network itself. In fact, the vast majority of our problems occur on, and are caused by weaknesses in, the endpoint devices: computers, mobile phones, and other widgets that connect to the Net. The problem is not that the Net is broken or malfunctioning, it’s that the endpoint devices are misbehaving — so the best solution is to secure the endpoint devices. To borrow an analogy from Gene Spafford, if people are getting mugged at bus stops, the solution is not to buy armored buses.

(Of course, there are some security issues with the network itself, such as vulnerability of routing protocols and DNS. We should work on fixing those. But they aren’t the problems people normally complain about — and they aren’t the ones mentioned in the Times piece.)

The second misconception is that the founders of the Internet had no plan for protecting against the security attacks we see today. Actually they did have a plan which was simple and, if executed flawlessly, would have been effective. The plan was that endpoint devices would not have remotely exploitable bugs.

This plan was plausible, but it turned out to be much harder to execute than the founders could have foreseen. It has become increasingly clear over time that developing complex Net-enabled software without exploitable bugs is well beyond the state of the art. The founders’ plan is not working perfectly. Maybe we need a new plan, or maybe we need to execute the original plan better, or maybe we should just muddle through. But let’s not forget that there was a plan, and it was reasonable in light of what was known at the time.

As I have said before, the Internet is important enough that it’s worthwhile having people think about how it might be redesigned, or how it might have been designed differently in the first place. The Net, like any large human-built institution, is far from perfect — but that doesn’t mean that we would be better off tearing it down and starting over.

Comments

  1. A few points:

    * Redesigns that force “accountability” or “identification” would destroy the greatest vehicle for anonymous free speech in the history of mankind. That would be a travesty of the worst kind, and perhaps the death knell of liberty itself.

    * The capability of the Internet for file-sharing is a feature, not a bug. The very purpose of the Internet is to let someone who wants to ship some bits to a willing recipient do so, and to let that recipient receive them. Any redesign that “addressed” the “problem” of “unauthorized” file sharing would result in an Internet that failed at its primary goal.

    * The entertainment industry may not like that, but too bad, so sad. Their monopolistic gravy train ride is over. It’s time for them to stop pining for the old days and start earning an honest living like the rest of us.

    * A free system will always have some number of people throwing rocks through windows, trampling the grass, and occasionally even mugging people or getting into fights. The only way to eliminate all such behavior is to create a fascist/Stalinist system, which no sane person truly wants.

    * Why are there two slightly different versions of one of the comments here? If people are posting a comment that succeeds but appears to have failed, and then retrying, then there’s a bug in the site design that sometimes reports failure on success. That should be looked into.

  2. From Forbes:

    For A Poisoned Internet, No Quick Fix

    Kaminsky, a researcher with security firm IOActive, announced last summer he had discovered a vulnerability in the Domain Name System, the piece of the Internet that converts domain names to the IP address where they’re hosted. That fundamental flaw in the Internet’s underpinnings would allow cybercriminals to perform an attack Kaminsky called “DNS cache poisoning,” invisibly redirecting Internet traffic at will, potentially funneling users to undetectable phishing sites or even intercepting e-mail or digital voice communications.
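
    To make the scale of that flaw concrete, here is a rough, hypothetical simulation (not from the Forbes piece) of the guessing game an off-path attacker plays against the 16-bit DNS transaction ID. The real Kaminsky attack also queried random subdomains so every race got a fresh, uncached lookup; the numbers below are made up.

    ```python
    # Simulate the off-path guessing game behind DNS cache poisoning.
    # All parameters are hypothetical illustration, not measurements.
    import random

    TXID_SPACE = 2 ** 16            # classic DNS uses a 16-bit transaction ID
    FORGED_REPLIES_PER_RACE = 100   # forged answers landed before the real reply
    RACES = 10_000                  # each race = one induced lookup

    def race_won() -> bool:
        """Does any forged reply guess the resolver's transaction ID this time?"""
        real_txid = random.randrange(TXID_SPACE)
        guesses = {random.randrange(TXID_SPACE) for _ in range(FORGED_REPLIES_PER_RACE)}
        return real_txid in guesses

    wins = sum(race_won() for _ in range(RACES))
    # Roughly 0.15% per race, so on average only a few hundred induced lookups
    # are needed before one forged answer sticks in the cache.
    print(f"poisoned in {wins} of {RACES} races ({wins / RACES:.2%} per race)")
    ```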

  3. It doesn’t make sense to talk about how to “secure the endpoint devices” when most of the problems in today’s Internet are, in fact, due to intentional misbehavior on the part of endpoint devices. Spammers, scammers, phishers, etc. know exactly what they’re doing with their endpoint devices, and these devices are working as intended — or at least as the miscreants intended to program them. P2Pers know that they’re illegally pirating intellectual property and that they’re hogging bandwidth.

    The only way to rein them in is to implement security, terms of service, and quality of service in the middle of the network. Misbehavior occurs at the ends; if it is to be stopped, it must be stopped in the middle.

    • A lot of (or most?) spamming, scamming, and phishing are enabled and supported by botnets, which are clearly a case of insecure endpoint devices. In other words, the users of these insecure endpoint devices are often unaware that their device is being used for malicious purposes. So solving the endpoint security problem (however impossible this is) would eliminate a lot of spamming, scamming, and phishing.

      The other case, of illegal copying of copyrighted stuff, is done by well-aware users. But I doubt that anyone can design a network that allows transfers of large amounts of public domain/copyleft/… media without allowing the transfer of large amounts of illegal-to-copy media.

  4. “The plan was that endpoint devices would not have remotely exploitable bugs.”

    That has never been anyone’s plan. Thank goodness, because it would have been a crazy plan! Even back in the very beginning, people were saying things like “the only way your computer can be completely secure is to be in a bunker a mile underground with no power or network connections, and no users.”

    The “plan” (if there was one) was, and has always been, security that is *good enough* to keep the system working for a *good enough* amount of the time, for a *good enough* proportion of the network. And that plan has succeeded in spades.

    I can believe some security consultants buying into the crazy plan, though. Security consultants, like politicians, often seem to get into a very Christian mindset of good vs evil, malicious vs benign. But real life’s not like that. Is a browser evil? What if it’s one of many thousands slashdotting your website? Is a Google-bot bad? What if a malicious or ill-trained employee put your company secrets on your website? Is running an SSH server bad? What if you have a weak root password?

    The concepts of flash crowds, bottlenecks, choke points, weak links, trojan horses, malicious insiders, etc. have been around since long before the internet, and network designers have always been well aware of them, and didn’t ascribe such problems to endpoints that are malicious but tragically impossible to defend against. They ascribed them to traffic, to bugs, to benign-but-human users, and, yes, to malicious users. Which is a good thing. This is why the internet, while not perfect, is as rugged as it is.

  5. Mitch Golden says

    One additional issue that would arise if the net were redesigned is that non-security concerns would get snuck into the redesign. There is no way the RIAA and MPAA would allow the current network to be deployed, if it were designed today. There would be tremendous pressure put on to build copyright protections into the underlying protocols.

    Moreover, if the network were redesigned, it would likely include the sorts of provisions for the sorts of non-network-neutral features that the carriers want.

    Even if there were some security miracle available by making changes to the internet, there’s no way it could be created now.

    • You mean like how The Man snuck all those evil features into IPv6? Wait, they didn’t.

      • Mitch Golden says

        IPv6 was designed in 1998 (http://en.wikipedia.org/wiki/IPv6) well before any of the affected industries had any interest in these matters. Right now, they are busy trying to retroactively put them into our current network, with the collusion of some of the carriers, especially AT&T. The problem (from their point of view) is that the exact things that make the current network “insecure” are what make it possible to do P2P filesharing, encrypted if necessary, and building these “features” in after the fact is proving to be difficult if not impossible. But if the network were redesigned from the ground up, it would be quite a different story.

        • the exact things that make the current network “insecure” are what make it possible to do P2P filesharing

          I don’t follow this at all. I’ve run P2P, behind a firewall, with many things locked down at the firewall. The things that enhance my security (OpenDNS, Little Snitch, Web of Trust) don’t interfere with P2P, and the firewall doesn’t interfere with P2P. Cellphones? All dumb, or on plans that compel them to be dumb. That has nothing at all to do with P2P. My largest current vulnerability is having kids at home using computers, and what things they might agree to — that is, PEBKAC. The tricky part with the firewall is knowing what ports to open, and for whom they are being opened, and that is mostly human factors (UPnP could be made a little less trusting — it needs to behave more like Little Snitch, and only allow applications to open ports, or to access ports, if the user says so with an admin password).
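
          As a sketch of the “only let applications open or use ports if the user approves” idea, here is a toy outbound-connection audit in the spirit of Little Snitch. It assumes the third-party psutil package and a made-up allow list; real tools enforce this in the kernel, while this only observes and reports.

          ```python
          # Toy per-application audit of established outbound connections.
          # Assumes psutil (pip install psutil); may need admin rights to see
          # every process. This reports unapproved apps; it does not block them.
          import psutil

          ALLOWED_APPS = {"firefox", "ssh", "transmission"}   # hypothetical allow list

          for conn in psutil.net_connections(kind="inet"):
              if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
                  continue
              try:
                  app = psutil.Process(conn.pid).name()
              except psutil.NoSuchProcess:
                  continue
              if app not in ALLOWED_APPS:
                  print(f"unapproved app {app!r} talking to {conn.raddr.ip}:{conn.raddr.port}")
          ```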

          One thing that might make things more secure would be to remove some of the laws that make it difficult for white hats to (for example) deploy inoculating bots. As it stands, if running a botnet is criminal, only criminals will run botnets, and the law is not exactly stopping them, so why not do away with the law? Legalizing botnets would make it much more painful to run an unsecured system, because casual vandals would attack, and the higher volume of competing bot-vs-bot wars would tend to make those systems less stable. Of course, Microsoft would be free to attack Apple, and vice versa, so this could get plenty amusing.

        • You mean, they actually got it finished and working?

          Good work whoever that was…

  6. To echo some of the previous comments, the key missing feature in the Internet is accountability. While accountability certainly doesn’t solve the problem of insecure end hosts, it helps defend against a number of classes of attack, most notably denial-of-service attacks. It needs to be built into the network in order to be effective. It clearly wasn’t carefully considered by the early Internet pioneers, who were thinking of their network as linking mutually trusted end hosts administered by competent, mutually trusting administrators. And fortunately, it appears to be one of the areas of focus of the Clean Slate project. (My take on the problem is here, for what it’s worth.)

    It’s true that changing the underlying design of the Internet won’t cure the problem of insecure end hosts–or the problems of war, disease, famine, bad breath and talking in movie theaters, for that matter. But criticizing efforts to fix real problems with the Internet’s design and operation because they only solve certain pressing problems, and not others, seems just a bit unfair.

    • Frater Plotter says

      “Accountability” is one of those great words that means anything you want it to mean.

      Some people say “accountability” when what they mean is “Let’s destroy anonymity, lock up the Wikileaks people, and turn over ISP logs to the Church of Scientology, ensuring that Internet users are ‘accountable’ for any harm they may do to the status quo.”

      And then, on the other hand, some people say “accountability” when what they mean is “Let’s hold end users strictly liable for the spam, viruses, and DDoS emitted by their cracked computers, ensuring that the only people who can use the Internet safely are Theo de Raadt and D. J. Bernstein.”

      Lastly, some people say “accountability” when they mean “Let’s hold programmers liable for the damage caused by security holes, ensuring that only rich corporations can afford the insurance necessary to produce software.”

      So what kind of “accountability” do you mean?

      • Actually, we define our notion of accountability in the paper I linked to. It consists of two properties. To quote:

        “1) Identification: the originators of traffic can be identified by some persistent attribute—that is, one that is relatively difficult to create, re-create or change. The originator’s IP address itself might be difficult to create or change, for example—or it might be easy to create or change, but reliably associated, at any given moment, with another more permanent attribute (e.g., legal name or credit card number).

        “2) Defensibility: destinations are able to prevent traffic from a source with a particular address or persistent attribute from affecting their use of the network. Note that defensibility requires, but does not necessarily follow from, identification: a network that doesn’t provide identification cannot provide defensibility—since the latter requires that traffic be distinguishable by originator—but a network that provides identification can still fail to provide defensibility (and hence, full accountability).”

        This notion of accountability approximates the kind associated with the (pre-VoIP/SpIT) telephone network, which has had to deal with various forms of unwanted traffic for over a century. While it’s true that it may imply mildly adverse consequences for people who insist on allowing their property to damage other people’s property (as per your second caricature above), I view that attribute as a positive feature, since it aligns the incentive of botted host owner and bot victim better than today’s unaccountable system.
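
        To make that concrete, here is a toy sketch of the defensibility property, under the assumption that every packet arrives with some verified persistent attribute alongside its source IP; the names are hypothetical, and the real enforcement would sit in the network rather than at the destination.

        ```python
        # Toy model: block on a persistent attribute, not on the (cheap to
        # change) source IP address. Everything here is a hypothetical sketch.
        class DefensibleInbox:
            def __init__(self):
                self.blocked_attrs = set()   # persistent attributes, not IPs
                self.delivered = []

            def block(self, attr: str) -> None:
                """The destination asks that traffic from this originator stop."""
                self.blocked_attrs.add(attr)

            def accept(self, src_ip: str, persistent_attr: str, payload: str) -> bool:
                # Filtering on the persistent attribute survives the sender
                # hopping to a new IP, which is what the definition is after.
                if persistent_attr in self.blocked_attrs:
                    return False
                self.delivered.append((src_ip, payload))
                return True

        inbox = DefensibleInbox()
        inbox.block("acct:mallory@example.net")   # hypothetical persistent attribute
        print(inbox.accept("203.0.113.7", "acct:mallory@example.net", "junk"))    # False
        print(inbox.accept("198.51.100.9", "acct:alice@example.org", "hello"))    # True
        ```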

        • How does making the owner of a cracked machine pay for botnet damage align their incentives better with those of bot victims? They’ve already lost much of the use of their machine and any personal information stored on it, and for most of them if they could avoid being hit by zero-day exploits they would. That’s like making residents of lead-paint-contaminated apartments “accountable” for any damage done by the lead without giving them the resources to do anything about it.

          It also seems to me that the claim about defensibility following from identification is not quite right. As long as endpoint machines are not perfectly secure, it’s only the machine identity that enables you to defend against attack. Knowing the name or credit card number of the person who owns a machine — or even is currently sitting at the keyboard — doesn’t necessarily tell you anything useful about stopping an attack coming from that machine or blocking bad behaviors from people who might use the machine.

          • On the contrary–most bots are minimally intrusive, so as to avoid messing up the host so badly that the owner actually does something about them. In fact, it’s not hard to imagine botnet operators mildly incenting host owners to be botted, in a world of perfect end host security–see footnote 2 in the paper.

            The point about “persistent attributes” is simply to allow the network to enforce a source-to-destination blocking policy that can’t be dodged as easily as, say, changing an IPv6 address. The persistent attribute doesn’t even have to be known end-to-end, as long as it can be used for this purpose. Again, I suggest reading the paper.

    • The first problem is that the packet sender gets to choose where the packet goes but the receiver usually pays the bandwidth cost.

      The second (closely related) problem is that anyone can send to anyone regardless of whether the traffic was wanted or unwanted. I believe that the fix is going to need to involve some system of invitation, where the routing works differently so you can’t send anything until someone else has requested it.

      Think about the subtle but important difference between how email works and how twitter works (not that I’m a great twitter fan but they have stumbled onto a significant refinement in the routing of information). With email, I type a destination and I push SEND and off it goes to that destination. With twitter, I type a source and I LISTEN to the source. Twitter is very hard to spam because people only listen to what they choose to listen to. Email is easy to spam (as we all know).

      I think the future of routing is going to be something more like twitter and less like email. We have been going at the problem backwards.
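
      A toy contrast of the two delivery models, with hypothetical classes rather than real email or Twitter APIs: under “push” the sender picks the destination, under “pull” the receiver picks the sources it is willing to hear from.

      ```python
      # Push (email-like): anyone can drop a message in your mailbox.
      # Pull (twitter-like): you only ever read sources you chose to follow.
      from collections import defaultdict

      class PushNetwork:
          def __init__(self):
              self.mailboxes = defaultdict(list)
          def send(self, to, msg):
              self.mailboxes[to].append(msg)      # nothing asks whether you wanted it

      class PullNetwork:
          def __init__(self):
              self.feeds = defaultdict(list)
              self.follows = defaultdict(set)
          def publish(self, source, msg):
              self.feeds[source].append(msg)
          def follow(self, reader, source):
              self.follows[reader].add(source)
          def read(self, reader):
              return [m for s in self.follows[reader] for m in self.feeds[s]]

      push, pull = PushNetwork(), PullNetwork()
      push.send("you@example.org", "BUY PILLS NOW")     # spam arrives unasked
      pull.publish("spammer", "BUY PILLS NOW")          # published, but nobody follows it
      pull.follow("you", "alice")
      pull.publish("alice", "lunch?")
      print(push.mailboxes["you@example.org"])          # ['BUY PILLS NOW']
      print(pull.read("you"))                           # ['lunch?']
      ```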

      • “The first problem is that the packet sender gets to choose where the packet goes but the receiver usually pays the bandwidth cost.”

        Nope. Sorry. Not only does the packet sender pay a bandwidth cost as well, but by at least one sensible reckoning they pay more. Most of us pay some monthly amount for a fairly wide down-pipe sitting next to an up-straw, and most of us have caps or surcharges for going over a certain traffic level, with up-straw usage counted towards that cap. So it’s reasonable to infer from this that you pay more per Mb/s uplink than you do per Mb/s downlink.
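
        (A hypothetical illustration: a $50/month plan sold as 20 Mb/s down and 2 Mb/s up buys ten times as much downlink as uplink for the same fee, so however you apportion that fee, each Mb/s of uplink costs more than each Mb/s of downlink.)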

        “I believe that the fix is going to need to involve some system of invitation, where the routing works differently so you can’t send anything until someone else has requested it.”

        This, of course, would destroy the freedom of the internet and therefore will never fly. An awful lot of applications, including the entire web, depend on unsolicited traffic being received and accepted by remote hosts, for starters; furthermore, enforcement of such a scheme would require destroying the usefulness of the Internet as a vehicle for anonymous free speech, which destruction should be anathema to all right-thinking people.

        “With twitter, I type a source and I LISTEN to the source. Twitter is very hard to spam because people only listen to what they choose to listen to. Email is easy to spam (as we all know).”

        This is an argument for developing sender authenticated e-mail protocols, not for compromising the free nature of the underlying architecture of the net.

        It’s easy to build an authenticating system on top of an anonymous one. It is pretty much impossible to build an anonymous one on top of an authenticating one.

        If we want to have both exist, then the basement layer has to be anonymous. Arguably, more so than the current basement layer is.

        • OK, which wiseass stripped my formatting? I had italic tags around “more” right near the end.

        • Anonymous says

          Go back and look at how twitter works, it is nothing like sender authenticated email. Not even approximately similar.

          • Anonymous says

            How rude! Don’t use that condescending tone with me ever again.

            The similarity is that Twitter lets you avoid receiving any messages from senders you don’t want to listen to. So would a sender-authenticated email system. Present email means you can receive all manner of junk from anyone, and have to do your own filtering for the most part.

            The different underlying architectures (Twitter has a “pull” model; sender-authenticated email would be “push”, but with perfect automated filtering of unapproved senders) are not relevant here, only the end result of being able to dodge undesired messages.
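
            A toy sketch of that “push plus perfect filtering of unapproved senders” idea, using an HMAC over a pre-shared key as a stand-in for whatever real sender authentication (DKIM-style signing, say) would provide. The keys, addresses, and the whole key-distribution problem are hypothetical and waved away here.

            ```python
            # Deliver mail only if it is verifiably from a sender on my approved list.
            # HMAC with a shared secret stands in for real sender authentication.
            import hmac, hashlib

            SENDER_KEYS = {"alice@example.org": b"alice-secret"}   # senders I approved

            def authenticated(sender: str, body: str, sig: str) -> bool:
                key = SENDER_KEYS.get(sender)
                if key is None:
                    return False                       # unapproved sender: drop silently
                expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
                return hmac.compare_digest(expected, sig)

            def deliver(sender: str, body: str, sig: str) -> str:
                return body if authenticated(sender, body, sig) else "<dropped>"

            good_sig = hmac.new(b"alice-secret", b"lunch?", hashlib.sha256).hexdigest()
            print(deliver("alice@example.org", "lunch?", good_sig))      # lunch?
            print(deliver("spammer@example.net", "BUY PILLS", "bogus"))  # <dropped>
            ```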

  7. With the NY Times and their readers, it’s often the blind leading the blind over there, as far as the web goes.

  8. I think the intent of the pipe was to send water through it, not a massaging showerhead.

    Because of the widespread (mis)use of scripting and active content (and things like ActiveX, which are “signed” but not sandboxed), the web too often sends code which has to be executed, instead of text, sound, images, or other things which can have clear boundaries.

    Often simply proxying things (even putting a router in between with outbound connections disabled by default) can limit the damage.

    You cannot make the endpoints secure when part of the defined function of the endpoint is to execute arbitrary code. I think they assumed no one would be stupid enough to do that (except via explicit steps like transferring code and compiling it).

    But for that matter, too many people have “auto-run” turned on for their CDs. How does anyone know a disc won’t do something like “rm -rf /” when they insert it into their PC?

  9. I agree that redesigning the Internet from the ground up might not be desirable or even achievable. But not all Internet security problems are fixed by simply securing the endpoints.

    One security problem that needs to be addressed in the network and cannot be solved at the endpoint devices, in my opinion, is that of denial of service. Even if all endpoint devices run software that does not have remotely exploitable flaws, you can still have a device or software that is malicious by design. Put multiple malicious endpoints together and they can launch denial of service attacks. To my knowledge, you cannot secure an endpoint against denial of service attacks without help from the network. So beyond improvements in the application-layer protocols (SMTP, HTTP, etc.), we also need improvements in the lower-level protocols, to handle (D)DoS.
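
    As one example of the kind of lower-level building block that has to sit in the network rather than at the victim, here is a toy per-source token bucket with hypothetical parameters. It illustrates the mechanism only: a large botnet spreads its traffic across so many sources that per-source limits alone do not end the problem, which is exactly why the distributed case is hard.

    ```python
    # Toy per-source token bucket, the sort of rate limit a router or
    # middlebox (not the victim endpoint) would apply. Parameters are made up.
    import time
    from collections import defaultdict

    RATE = 100.0    # packets per second allowed per source
    BURST = 200.0   # bucket depth

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(src_ip: str) -> bool:
        b = buckets[src_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False   # over its share: the network drops it before it reaches you

    # A back-to-back flood from a single address gets squeezed down hard:
    drops = sum(not allow("203.0.113.7") for _ in range(10_000))
    print(f"dropped {drops} of 10000 packets from one flooding source")
    ```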

    • If people actually had to put multiple malicious endpoints on the net, instead of subverting other people’s machines and turning them malicious, DDoS would be rather harder to scale. Instead of costing a few thousand dollars in hacking time, a botnet would cost a few million or tens of millions to set up. And if spoofing were also somewhat more difficult, responding to a DDoS attack would be that much easier.

      Right now all the numbers are stacked in favor of people wanting to do bad things on the net. But at least some of the modeling that’s been done (like the estimates for the profitability of phishing) suggests that evildoing still isn’t particularly a gold mine. It might not take that much of a change in incremental costs to make most attacks unprofitable, leaving only the highest-value targets at risk.

  10. The plan was that endpoint devices would not have remotely exploitable bugs.

    The easiest solution is to redefine these as features. Then the plan can be judged to have been very successful.

  11. Thanks for expressing the issue succinctly.

    Security people have been on this one for years with relation to electronic commerce: If I am ever strictly legally responsible for my electronic signature, under the current regime I am completely screwed. There are just too many ways for people to get hold of it by hacking endpoint devices or by hacking me. When I saw the “gosh, it would all work if everyone just had solid identification” line, my only thought was “what a nice set of high-value targets.”

    That said, there may well be some use to restricting spoofing as part of a defense in depth, but we all knew that already, didn’t we? Oy.

  12. I believe the end devices problem must be taken further and extended to include humans. Most of the current internet security problems are based on the human factor (as Kevin Mitnick pointed out in The Art of Deception).

    Also, to my knowledge, most of the security issues (the exploitable, technical ones) over the years have manifested through the one web browser that has maintained a dictatorship over Internet users for years. With the development of Open Source software solutions, which are a lot easier to debug and test for security flaws, the Internet will, I believe, enter an era of less devilish usage and more knowledge sharing.