March 19, 2024

Internet So Crowded, Nobody Goes There Anymore

Once again we’re seeing stories, like this one from Anick Jesdanun at AP, saying that the Internet is broken and needs to be redesigned.

The idea may seem unthinkable, even absurd, but many believe a “clean slate” approach is the only way to truly address security, mobility and other challenges that have cropped up since UCLA professor Leonard Kleinrock helped supervise the first exchange of meaningless test data between two machines on Sept. 2, 1969.

The Internet “works well in many situations but was designed for completely different assumptions,” said Dipankar Raychaudhuri, a Rutgers University professor overseeing three clean-slate projects. “It’s sort of a miracle that it continues to work well today.”

It’s absolutely worthwhile to ask what kind of Net we would design if we were starting over, knowing what we know now. But it’s folly to think we can or should actually scrap the Net and build a new one.

For one thing, the Net is working very nicely already. Sure, there are problems, but they mostly stem from the fact that the Net is full of human beings – which is exactly what makes the Net so great. The Net has succeeded brilliantly at lowering the cost of communication and opening the tools of mass communication to many more people. That’s why most members of the redesign-the-Net brigade spend hours every day online.

Let’s stop to think about what would happen if we really were going to redesign the Net. Law enforcement would show up with their requests. Copyright owners would want consideration. ISPs and broadcasters would want concessions of their own. The FCC would show up with an anti-indecency strategy. We’d see an endless parade of lawyers and lobbyists. Would the engineers even be allowed in the room?

The original design of the Internet escaped this fate because nobody thought it mattered. The engineers were left alone while everyone else argued about things that seemed more important. That’s a lucky break that won’t be repeated.

The good news is that despite the rhetoric, hardly anybody believes the Internet will be rebuilt, so these research efforts have a chance of avoiding political entanglements. The redesign will be a useful intellectual exercise, and maybe we’ll learn some tricks useful for the future. But for better or worse, we’re stuck with the Internet we have.

Comments

  1. A Clean Slate researcher from Stanford recently gave a talk at my university, and the whole thing was a thinly veiled attack against Network Neutrality. I blogged about my discussion with the speaker and his weak response: artificialminds.blogspot.com

  2. Tom Welsh says

    Just this morning, I glimpsed such a discussion in a British newspaper. One “obvious requirement” was that no one should be allowed to use the Internet without their *age* being visible to everyone (as part of a complete identity package). Why? Well obviously, to stop “pedophiles” from “grooming” young people. No doubt sex (sorry, gender), political affiliation, criminal record, and of course “ethnicity” would soon follow.

    Quite apart from aesthetics, the designs of both the Internet and the Web are technically superb precisely because they are so minimal.

    “You know you’ve achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away”.
    – Antoine de Saint-Exupery

  3. The problem is that what you propose would be perverted into a way to tier internet access so that lots of things were more expensive than they are now, even at the same quality level. Pay extra for crystal clear VOIP — no problem. Pay extra just to have the same VOIP you currently don’t pay extra for? No fair. Likewise limiting the choices by not allowing “current quality” VOIP so you have to choose either none or the premium high quality sort.

    More generally, I don’t want to have to manage paying for every network my packets travel through. Right now, whatever those costs are, they are all subsumed into a single internet bill. And I do get the feeling that your “better billing systems” would turn into my having shittier net access for a higher price than I do now, all so some guys could pay even more to VOIP without any fuzz in their audio. Thanks, but no thanks.

    Maybe a better solution is for real-time-critical stuff to have its own separate network, perhaps based on dedicated circuits rather than packet switching. So you’d have these handsets connected to their own lines, and you’d punch in some kind of routing code that would find some available segments and form a circuit to the other end … I think maybe this separate device ought to have a name. How about “telephone”?

    🙂

    Seriously, though, a dedicated network might make sense. Or all this might go away with some further infrastructure growth. I’d much rather have bandwidth too cheap to meter than some complicated interlocking system of corporate contracts and fees to wade through, and not enough money to pay it all.

    A few changes to FCC policy re: broadband competition and we’d have actual broadband competition, and you’d see prices even out while quality jumped up as they all competed to provide better service. Imagine that?

  4. “Net neutrality” is a pipe dream; it never existed and it never will. The “Byzantine mazes of layered fees” is basically what I said in the first place — we haven’t yet developed suitable billing models to support multimedia over IP. It isn’t a moral problem, nor an engineering problem (nor a legislative problem); it is an accounting problem. Until we have an easy-to-implement technique that allows billing more for enhanced routing options (all the way down the chain), what we will get is plain vanilla queuing without any enhancements (what some people call “net neutrality”).

    As for your distributed VPN plan…

    Email dynamics are completely different to VoIP. Email can get there at any time, so if a link is congested you just wait around and try later. Store-and-forward is fine for non-realtime data (in fact, store is good because you can store longer if it happens to be convenient). With realtime VoIP data, the situation is quite different… changing your route will guarantee an audio drop-out because different routes have different lag (even worse if you switch routes in response to congestion, because it takes time to “realise” that the link is congested and then get the message out that the route must change). There’s a further problem that congestion is one of those fractally 1/f power-law functions, so you get these momentary bursts of congestion that turn up for no reason that anyone can explain and then vanish again a second later. Not quite enough time to re-route your VPN but just enough time to stuff up the audio stream.

    But all of this can be solved in a much better way by designing queuing algorithms (which we already know how to do). From an engineering standpoint (i.e. if the whole world was a socialist dictatorship and one engineer was given the job of achieving the best result with finite resources), placing sensible prioritised queues at every router would be the right answer. In a non-dictatorship capitalist world we have some additional factors: preventing an end-node from gaming the system and getting more than they paid for, and ensuring that the bills are paid.
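
    For what it’s worth, the “sensible prioritised queues” part is easy to sketch. The toy two-class queue below is my own illustration (the class split, capacity and drop policy are made up, not anything a real router ships), but it shows the whole idea: realtime packets always go out first, and bulk traffic is what gets dropped when the buffer fills.

        from collections import deque

        class TwoClassQueue:
            """Toy strict-priority queue: realtime before bulk, drop bulk first."""

            def __init__(self, capacity=64):
                self.realtime = deque()   # e.g. VoIP packets
                self.bulk = deque()       # e.g. email, big downloads
                self.capacity = capacity

            def enqueue(self, packet, realtime=False):
                if len(self.realtime) + len(self.bulk) >= self.capacity:
                    if self.bulk:
                        self.bulk.popleft()        # drop the oldest bulk packet
                    elif not realtime:
                        return False               # no room left for bulk traffic
                    else:
                        self.realtime.popleft()    # last resort: drop oldest realtime
                (self.realtime if realtime else self.bulk).append(packet)
                return True

            def dequeue(self):
                if self.realtime:
                    return self.realtime.popleft()
                if self.bulk:
                    return self.bulk.popleft()
                return None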

  5. The problem is you’re arguing in support of ending net neutrality, which will just lead to all kinds of “can’t get here from there” incompatibilities and Byzantine mazes of layered fees … and ultimately, everyone demanding their cut resulting in paying through the nose for what we currently get cheap.

    The real fix is for your voip provider to have a VPN with its own internal routing. They could have four or five centers in geographically-dispersed major cities, all on different network backbones, and adaptively route calls; then congestion or other problems along one route would be avoidable or get averaged out. Consider a simpler case: an email. If you send it directly from New York to San Francisco, it might run into problems. If you send a copy to someone in Miami with instructions to forward it to San Francisco, and another to someone in Seattle likewise, plus a direct one, the odds are that the San Francisco recipient will get at least one of the three. In fact, Usenet works something like this under the hood. Or tries to, at any rate. Nothing needs to be changed about the underlying network to use this trick, either; you just need to have a geographically dispersed presence and local internet access at each site — as a long distance provider will have anyway.

  6. I’m using a VoIP phone as my regular house phone. It goes over wireless internet to a tower somewhere (uses Raytheon phased-array technology, very cool), so I have no actual copper phone line and I pay no “last mile” tax to the phone company. In effect, it’s a mobile phone that also provides Internet (but in a bigger package, so I don’t carry it in my pocket) and it does work in the car (only tried it once, someone else was driving).

    Compared to a fixed copper line plus ADSL, my setup costs me more monthly fees (about 50% more) but the call costs are much lower (10c nationwide untimed calls, international approx 2c per minute). The call quality is mostly good but it does drop audio from time to time (maybe 5% of the time, maybe less, I haven’t kept close notes). It means that my expenses are predictable and I don’t think twice about spending time on the phone.

    Compared to a regular mobile phone, the call and data costs are heaps lower but I’m not as mobile. Data costs through mobile phones are falling so we will see if people start using their mobile handsets as their main IP feed (that might be interesting).

    I have no QoS on the IP data that comes into my network from the outside world. I have no way of asking for QoS on this data, and even if I were willing to pay extra for a phone with fewer drop-outs, I can’t do it through an IP network because there isn’t a billing model to support that. Even if my ISP was willing to let me prioritise some of my packets for a price, that only helps me to the edge of THEIR network… so I then need to start negotiating with the next layer (who don’t bill me directly) and see what they can offer. If I get shitty call quality after paying for QoS priority packets, I don’t know who to blame because the traffic has gone through so many networks. Also, should the sender pay for the QoS (they are the one who puts the flags on the packets)? If the sender pays, then how does my local ISP (on the receiving end) get their share of the money?

    Eventually it will happen, but right now the ISPs just use a simple monthly charge plus volume charge formula and all traffic is the same price.

    I would also be very happy to be able to put email and big downloads into a category of cheaper traffic that routers can arbitrarily delay or throw away when it suits them. I don’t care if the email is slower, especially if it saves money. Again, I can’t actually buy a billing plan with that capability because all traffic is the same price.

  7. We don’t need “suitable billing methods” that will lead to spiraling prices even as costs continue to plummet, like we already have in the cable/satellite TV sphere.

    What we need is to terminate the conflict of interest that inherently exists in letting companies provide both Internet access and any of the following: content, TV service, telephone service, …

    Unfortunately, right now removing those appears to leave only slow dial-up ISPs.

    The nature of ownership of “last mile” connections has to change. Since those connections run to individuals’ houses, maybe the homeowners should own them, perhaps indirectly by way of the municipality owning them. Then companies provide services carried over those last mile connections, but the connections themselves aren’t owned by “the phone company” or whatever, and phone, television, and internet providers may all vie for a homeowner’s business.

  8. There are two generally accepted ways for non-endpoint routers to apply “backpressure” on TCP connections. The best method is to delay the ACK packets by up to several seconds without actually dropping them (this requires memory in the router to store the queue, and long delays trigger a retransmit, which you don’t want); the not-as-nice method is to edit the sliding window in the packet to make it look like the receiver is asking for less data from the transmitter. Neither of these methods is actually a design feature of TCP, but any protocol designed around a sliding-window acknowledgement system will end up open to the same “backpressure” use and abuse.
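
    As a rough illustration of the not-as-nice method, a box in the middle can rewrite the advertised window in packets it forwards. The sketch below uses scapy with made-up values (the interface name and the 4 kB clamp are mine); in a real deployment the original packets would also have to be dropped, e.g. by a firewall rule, so that only the edited copies go forward.

        from scapy.all import IP, TCP, sniff, send

        CLAMP = 4096  # pretend the receiver only has 4 kB of buffer left

        def clamp_window(pkt):
            # Shrink the advertised window so the sender slows down.
            if pkt.haslayer(TCP) and pkt[TCP].window > CLAMP:
                pkt[TCP].window = CLAMP
                del pkt[IP].chksum            # force checksum recomputation
                del pkt[TCP].chksum
                send(pkt[IP], verbose=False)  # forward the edited copy

        # "eth1" is a placeholder interface; originals must be dropped elsewhere.
        sniff(iface="eth1", filter="tcp", prn=clamp_window, store=False)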

    Of course, IP is perfectly good for carrying multimedia data or realtime data or anything you want but most ISPs don’t have billing models that encourage them to care about quality of service… worse yet, many of the core routers are owned by telephone companies who have a huge financial interest in ensuring that IP phone calls are as bad as possible. The TOS system is perfectly sufficient for high quality IP phone calls, multimedia and what have you but hardly anyone ever implemented it. The Diffserv system was a backwards step from TOS but in principle it’s good enough — still completely useless until it actually gets implemented, and it never will get implemented until suitable billing models are developed.
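
    The marking itself is trivial, which is rather the point: one setsockopt() call per socket is all an application needs. In the sketch below the value is the conventional “Expedited Forwarding” code point used for voice, and the address and port are placeholders; what’s missing in practice is any reason for the routers along the path to honour the bits.

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # DSCP "Expedited Forwarding" (46) shifted into the old TOS byte = 0xB8
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)
        sock.sendto(b"voice payload", ("198.51.100.7", 5004))  # placeholder peer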

  9. How long do comments normally “await moderation”, and what causes them to? One of mine here has been “awaiting moderation” for about two days now, and it is forcing me to leave a browser tab here so I can preserve a copy and resubmit it if necessary. Nothing in it is suggestive of spam — in particular, there are no URLs or HTML, so I’m not even sure why it’s going through quarantine to begin with. However, if random comments with no suspicious content are going to be subjected to this, they need to pass through the process in a timely manner, since to avoid data loss the browser has to be left open to a page with the comment in some form amenable to being copied to the clipboard (either in the edit box or displayed as “awaiting moderation”) until the comment is definitely successfully posted. (Of course, if the comment is ultimately rejected, I’ll edit it to fix whatever is deemed unacceptable about it before resubmitting it, but that obviously requires I preserve a copy to edit. It also means any rejection will have to include an explanation of the reason.)

  10. Live broadcasts over the Internet do seem “broken”. I’m not sure that’s a problem — there are only a few events that people really want to watch in real time with minimal delay, typically breaking news and sporting events, and the existing television infrastructure is perfectly adequate to the task. For anything else, people are better served by downloadable, on-demand, watch-when-you-want video. The same applies to other multimedia, such as standalone audio.

    Hiding the routing is important for other robustness characteristics and to make censorship more difficult. Redundant routing of critical data can easily be implemented over the existing net. If you want to send data from a center in New York to one in LA with redundancy, you can have additional relays set up in Tampa and Vancouver, say, and send a copy to each of these (routed the usual way under the hood), and each forwards what it receives to the server in LA. The one in LA discards the second copy, or perhaps compares the two copies when it receives both and issues a warning if they differ. (But garbling can be detected anyway by the use of MD5 hashes or other checksum methods, and with more sophisticated codes even corrected, up to a certain point.)
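
    A rough sketch of that keep-one-copy logic looks something like the following (the relay names and the transport are left as placeholders, and MD5 here is only for detecting garbling, not for security):

        import hashlib

        def digest(data: bytes) -> str:
            return hashlib.md5(data).hexdigest()

        def send_redundantly(payload: bytes, relays, deliver):
            # e.g. relays = ["tampa", "vancouver"]; deliver() is whatever
            # ordinary transport you already have to each relay.
            for relay in relays:
                deliver(relay, payload)

        class RedundantReceiver:
            def __init__(self):
                self.seen = {}  # message-id -> digest of the first copy received

            def accept(self, msg_id, payload):
                d = digest(payload)
                if msg_id not in self.seen:
                    self.seen[msg_id] = d
                    return payload              # keep the first copy
                if self.seen[msg_id] != d:
                    print("warning: copies of", msg_id, "differ")
                return None                     # duplicate, discard it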

    In fact, Usenet achieves redundancy in exactly this way already — a given news server may receive an article by any of several possible propagation paths, and conflates any duplicates it receives (messages with the same message-ID) from different directions. Usenet proves that we can achieve Kevin’s second point without re-engineering the net from the ground up. And Usenet is over thirty years old, so we’ve had this for a while now.

    Absolute anonymity is not something any big organization (government or private) would ever consider a priority when redesigning the net — in fact, easy traceability is far more likely on the agendas of such organizations. Again, a redesign is fortunately unnecessary: we have TOR routers and Freenet already, layered over the existing infrastructure and designed to defeat traffic analysis and other methods of tracing. Useful for all those Chinese dissidents, and maybe, soon, for American ones critical of Bush and co.

  11. The most important thing needed for a new internet is absolute anonymity. It is too easy now to log an IP address and track it back to a user. Hopefully, if this “grand rearchitecture” takes place, this will be addressed.

  12. While I totally agree that the physical Internet will not be scrapped, one of the enticing possibilities is to build entirely new protocols on top of the MAC layer that could essentially re-invent the Internet in place, with a gradual switchover.

    My conversations with CS researchers all over the world have led me to believe that there are two problems with the existing Internet that really do need to be addressed.

    1. TCP and IP were not designed to carry multimedia, and yet they are used frequently to do just that. We try to engineer QOS solutions to work on top of them, but we’re really just hacking around a fundamental problem. Especially for live broadcasts, it’s just broken. Yes, several companies have introduced alternate protocols for media transmission, but they’re all still running on IP.

    2. In a similar vein, TCP and IP were designed to completely hide underlying routing. But sometimes for reliability and performance purposes you want to be able to do detailed packet routing. A great example is when you want to specify that redundant copies of data need to take completely independent paths so that there is no single point of failure in the network. We can’t do that today.

  13. Doesn’t the modern Internet already use backpressure as well as packet dropping? I seem to recall there’s something in present TCP implementations (explicit congestion notification, I believe) that leads to well-behaved senders rate limiting if they get some kind of feedback from downstream regarding congestion. In any case, depending solely on backpressure would be a bad idea, since a badly behaved (whether buggy, misconfigured, or actively malicious) host disregarding the signals would create a massive DDoS attack not just on a target host but on the routing infrastructure itself. You’d need a safety valve, and the obvious one is to drop packets from hosts that disregard backpressure — a router would tell sender X to rate limit to Y b/s, and if it didn’t, start dropping some packets from X so the remainder came out to around Y b/s.

    (Actually, I expect backpressure would be implemented with reference not to the ultimate sender, but to the immediately preceding node in the hop chain, with the effects propagating backwards along that chain. Still, a router that is expected to rate limit its outbound links (as indicated by backpressure from the next hops, and by its own internal capacity constraints) but that gets more than it wanted from its inbound links will have to cope somehow, and dropping some of its inbound packets is the only apparent coping strategy. Naturally, it makes sense to drop inbound packets from noncompliant senders first before resorting to dropping packets from anywhere else; those senders are just one hop upstream this time, of course.)
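
    A back-of-the-envelope version of that safety valve is just a per-sender token bucket: admit traffic up to the requested rate and drop the excess. (This is a standard policing technique sketched for illustration, not anything specific to how real TCP stacks or real routers signal congestion.)

        import time

        class PerSenderPolicer:
            """Forward at most `rate` bytes/sec from one sender; drop the rest."""

            def __init__(self, rate, burst):
                self.rate, self.burst = rate, burst
                self.tokens = burst
                self.last = time.monotonic()

            def admit(self, packet_len):
                now = time.monotonic()
                # Refill tokens for the time elapsed, capped at the burst size.
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_len <= self.tokens:
                    self.tokens -= packet_len
                    return True     # within budget: forward
                return False        # sender ignored the signal: drop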

  14. kaukomieli says

    we would see the typical second-system effect…

    http://en.wikipedia.org/wiki/Second-system_effect

  15. Curious: you can tunnel anything over anything as long as you don’t care what the performance is. But some people working on these “clean slate” projects do care about performance, and thus they’re building GENI instead of tunneling over IPv4.

    Example #1: Imagine a clean-slate protocol that requires all links to be reliable (given that many link layers are reliable today, this may be a reasonable assumption) and uses backpressure rather than packet dropping to handle congestion. You could tunnel that over IPv4 (perhaps using TCP connections as virtual links), but the characteristics would be quite different from running the same protocol over physical links.
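
    (A minimal sketch of what “TCP connections as virtual links” could mean, with the host and port made up: each overlay frame is length-prefixed and shipped over an ordinary TCP connection, so the “link” is reliable and ordered but inherits TCP’s retransmission and queueing delays rather than a wire’s.)

        import socket
        import struct

        def open_virtual_link(host="peer.example.net", port=9000):
            # One TCP connection plays the role of one overlay "link".
            return socket.create_connection((host, port))

        def link_send(link, frame: bytes):
            # Length-prefix each overlay frame so frame boundaries survive TCP.
            link.sendall(struct.pack("!I", len(frame)) + frame)

        def _read_exact(link, n):
            buf = b""
            while len(buf) < n:
                chunk = link.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("virtual link closed")
                buf += chunk
            return buf

        def link_recv(link) -> bytes:
            (length,) = struct.unpack("!I", _read_exact(link, 4))
            return _read_exact(link, length)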

    Example #2: If you want to design some sort of hard real-time protocol that assumes each link has bounded delay, I don’t think you could tunnel over the Internet at all.

    But many projects are tunneling over IPv4, e.g. I3 and RON.

    BTW, I am surprised at the pessimistic and sometimes paranoid tone of comments on this topic (both here and in other forums). Should we dismiss university research before it starts because some fascist businesses or politicians might possibly hijack it in the future?

  16. This is the patently absurd type of article that I am glad to find on your site… Your commentary on the human element is right on. And I for one like it that way.

    Thanks,
    C

    BTW:
    I had your site bookmarked; now I’ve come back and subscribed via RSS so I can keep up with the latest.

  17. Stuart Lynne says

    A re-design from scratch will be as successful as the last one. Does no one remember the ISO networking standards? X.400 mail?

    Those were bizarre and immense standards that many people and companies spent inordinate amounts of time and money designing and trying to implement, with little or no success, despite being officially mandated for many (US) government departments in the 1980s.

  18. A “clean slate”?

    A clean slate is when you redesign something from scratch, and remove all of the problems that evolved in the predecessor. Rather like new effective DRM on re-recorded movie disks, when new disk formats are introduced.

  19. I know this is a bad question because journalists always fumble the compu-tech language, but here goes:
    What does it mean to start with a “clean slate”?

    Basically, couldn’t one build the entirely new “Internet” on the back of the old (IPv4) internet, by just assuming that layer represents the link layer? To rephrase: doesn’t the layered OSI model let us pretend that we’re building a new network at each level, because the upper layer only needs to know basic qualities of the lower layer (reliability, etc.)? So anyone who wanted to demonstrate their new-and-improved Internet could just do so on top of the existing one.

    If this is the case, then the idea that the new Internet and the old Internet are mutually exclusive is rubbish. Or it’s a deliberate attempt to do away with a free and open Internet in favor of an authoritarian scheme as has been pointed out. So it’s either ignorant or disingenuous.

    Am I missing something here?

  20. Bryan Feir says

    IPv6 is still planned to be rolled out. There are two primary stumbling blocks, however:

    A) Many of the solutions that were part of the raison d’être for IPv6 have been ported back to IPv4 via classless inter-domain routing, network address translation, and the like, thus reducing the immediacy of the need for it.

    B) In countries which already have a significant IPv4 equipment investment, there is an understandable lack of will on the part of ISPs to replace all the otherwise working equipment.

    That said, given that Windows XP SP1, Mac OS X 10.3, Linux and all the BSDs support IPv6 (and in some cases have for nearly ten years), we’re at the level where at least most of the endpoints of the network will know what to do with it, which was previously another stumbling block.

  21. ed_finnerty says

    Does anyone know what happened to IPv6 – is it still planned to be rolled out, or was it killed by the initiatives of various parties to include many of the “features” which Ed suggests would be considered in a new internet?

  22. On the contrary, the Internet is constantly redesigning itself. It wasn’t long ago that the concept of a hyperlink was pretty exciting and most people were pushing stuff around with ftp. There was even a time before search engines.

    Of course, Internet Protocol is freely available with basically no restrictions and no cost. Anyone designing a network is faced with the decision to [A] use IP, which is already done, costs next to nothing and has well understood behaviour, or [B] invent their own, which costs lots, takes a long time, will need years of debugging and might perhaps work better.

    Sure enough, most people take the [A] option… and we don’t hear much from those who take the [B] option, because they usually run out of money or time or both.

  23. The new internet should be structured like cable TV, partitioned in 1000 channels with constant reruns of recycled crap, propaganda, and ads, and an additional 1000 pay-per-view/download “premium” channels with constant reruns of recycled crap, propaganda, and ads. And the content transfer should be implemented in such a way that you cannot skip the ads.

  24. dave tweed says

    One thing no-one’s brought up yet: all the agencies Ed mentioned are US agencies. Suppose some task-force with an overwhelming US presence were to start designing. I can easily imagine that countries that accept the current internet because “that’s the way things have developed” would suddenly feel an urge to “have their concerns addressed” (whether it’s “bad” totalitarian states or the just “different” European Union-type things). Given that I can’t see them having any luck with that (just due to organisational inertia, if nothing else) and the fact that “there’s going to be change anyway”, the disincentive to other geo-political groupings producing their own systems is significantly reduced, and at best we need transcoding interfaces each time you cross between implementations. It’s worth appreciating that the internet got popular enough to entrench before the geo-politics-minded people noticed.

    Look at the history of Exalead for an example of what can happen.

  25. “Good News,” there are many good arguments against a .xxx domain, related to free speech concerns, enforceability, etc. Seth Finkelstein has written some excellent articles explaining what a boondoggle it would have been for the .xxx registrar.

  26. “Good news, everyone”: Please read RFC3675: http://www.ietf.org/rfc/rfc3675.txt

    I could see a next-generation Internet arising much like the current one did — at first, a toy project (though a large-scale one) shepherded by researchers, then a useful but somewhat arcane tool for academics and techies, and eventually, once adopted by industry, a crucial infrastructure. Wishful thinking, probably, but there are a few problems in today’s Internet that I don’t see us digging ourselves out of any time soon…

  27. It’s about designing a centralized network like their cellular and GSM networks. The Internet cannot be broken easily, and that is what politicians and various cartel members dislike about the current internet. For politicians it can be very useful to be able to switch off certain segments of a network, and cartel members can fake scarcity to keep prices high. Any redesign from scratch is about POWER, never forget that!

  28. Good news, everyone says

    The internet can be rebuilt from the ground up.

    But if you think the lobbyists trying to redesign it would be a problem, you’ll love the bitchfight when they try to argue that all the others (but not themselves, it’s not their responsibility) should actually pay for the work to be done. And nothing will actually happen until someone agrees that it’s their responsibility – which will be about a billion years after never. 😉

    Hell, there was a campaign to move porn off to a “.xxx” TLD which was scuttled by the US government (for reasons that I still don’t understand) even though that would allow libraries and schools to trivially filter out large quantities of porn. If even that can’t make it through, how does anyone think that genuine change is going to occur?

  29. Computer scientists love to redesign things from scratch. The trouble is that by the time you reach equivalent functionality to the existing system, you end up having an equivalent amount of the horrible and bizarre compromises and misuses that made you want to rewrite the thing in the first place.

    Just look at the modern computing world. Modern operating systems are almost universally based around UNIX, which has a heritage going back decades. Efforts to write a new OS from scratch invariably either fail or end up incorporating the UNIX heritage. Successful new computer languages are so strongly based on the existing ones that it’s hard to tell the difference without squinting.

    In the end, this is a good thing. Reality teaches us more than theory. In the long term, evolution gets us there faster and better than revolution.

  30. With hindsight, no-one would ever have been allowed to do anything.

  31. It seems to me that the beauty and subsequent practicality of the internet is that it wasn’t over-designed. The problem with designing something is that you’re likely to get just what you wanted to get. That sounds good, and would probably be good, at first. There’s a reason that what is probably the world’s most popular motorcycle is at heart a little-changed 50-year-old design (the Honda Dream, aka the “Honda Stepthrough”), even though there are far better motorcycles for almost any specific task.