December 12, 2024

Bandwidth Needs and Engineering Tradeoffs

Tom Lee wonders about a question that Ed has pondered in the past: how much bandwidth does one human being need?

I’m suspicious of estimates of exploding per capita bandwidth consumption. Yes, our bandwidth needs will continue to increase. But the human nervous system has its own bandwidth limits, too. Maybe there’ll be one more video resolution revolution — HDTV2, let’s say (pending the invention of a more confusing acronym). But to go beyond that will require video walls — they look cool in Total Recall, but why would you pay for something larger than your field of view? — or three-dimensional holo-whatnots. I’m sure the latter will be popularized eventually, but I’ll probably be pretty old and confused by then.

The human fovea has a finite number of neurons, and we’re already pretty good at keeping them busy. Personally, I think that household bandwidth use is likely to level off sometime in the next decade or two — there’s only so much data that a human body can use. Our bandwidth expenses as a percentage of income will then start to fall: because the growth in demand will have slowed, because incomes will continue to rise, and because the resource itself will continue to get cheaper as technology improves.

When thinking about this question, I think it’s important to remember that engineering is all about trade-offs. It’s often possible to substitute one kind of computing resource for another. For example, compression replaces bandwidth or storage with increased computation. Similarly, caching substitutes storage for bandwidth. We recently had a talk by Vivek Pai, a researcher here at Princeton who has been using aggressive caching algorithms to improve the quality of Internet access in parts of Africa where bandwidth is scarce.
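
To make the substitution concrete, here’s a toy sketch in Python — an illustration of the principle, not any particular system’s code. Compression spends CPU cycles on both ends of a connection to shrink what crosses the wire:

    import zlib

    # Highly redundant payload, like the markup a web server ships.
    payload = b"bandwidth is scarce; computation is cheap. " * 1000

    # Spend computation to shrink what we send over the network...
    compressed = zlib.compress(payload, 9)

    # ...and spend it again on the far end to recover the original.
    assert zlib.decompress(compressed) == payload

    print(f"raw: {len(payload)} bytes, on the wire: {len(compressed)} bytes")

The same trade runs in reverse when computation is the scarce resource: skip the compressor and send the raw bytes.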

So even if we reach the point where our broadband connections are fat enough to bring in as much information as the human nervous system can process, that doesn’t mean that more bandwidth wouldn’t continue to be valuable. Higher bandwidth means more flexibility in the design of online applications. In some cases, it might make more sense to bring raw data into the home and do calculations locally. In other cases, it might make more sense to pre-render data on a server farm and bring the finished image into the home.

One key issue is latency. People with cable or satellite TV service are used to near-instantaneous, flawless video content, which is difficult to stream reliably over a packet-switched network. So the television of the future is likely to be a peer-to-peer client that downloads anything it thinks its owner might want to see and caches it for later viewing. This isn’t strictly necessary, but it would improve the user experience. Likewise, there may be circumstances where users want to quickly load up their portable devices with several gigabytes of data for later offline viewing.
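
A rough sketch of that set-top design in Python, with hypothetical names (fetch, likely_ids) standing in for whatever a real client would use:

    class PrefetchCache:
        """Toy model of a TV client that trades storage and off-peak
        bandwidth for low startup latency."""

        def __init__(self, fetch):
            self.fetch = fetch   # hypothetical callable: content id -> bytes
            self.store = {}      # stand-in for the local disk

        def prefetch(self, likely_ids):
            # Run during idle bandwidth: download what the owner
            # might want before it is asked for.
            for cid in likely_ids:
                if cid not in self.store:
                    self.store[cid] = self.fetch(cid)

        def play(self, cid):
            # A cache hit starts instantly; a miss falls back to streaming.
            return self.store.get(cid) or self.fetch(cid)

The design choice is the one caching always embodies: spend cheap storage and idle bandwidth now to avoid expensive latency later.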

Finally, and probably most importantly, higher bandwidth allows us to economize on the time of the engineers building online applications. One of the consistent trends in the computer industry has been towards greater abstraction. There was a time when everyone wrote software in machine language. Now, a lot of software is written in high-level languages like Java, Perl, or Python that run slower but make life a lot easier for programmers. A decade ago, people trying to build rich web applications had to waste a lot of time optimizing them to achieve acceptable performance on the slow hardware of the day. Today, computers are fast enough that developers can use high-level frameworks that are much more powerful but consume a lot more resources. Developers spend more time adding new features and less time squeezing better performance out of the features they already have, which means users get more and better applications.

The same principle is likely to apply to increased bandwidth, even beyond the point where we all have enough bandwidth to stream high-def video. Right now, web developers need to pay a fair amount of attention to whether data is stored on the client or the server and how to efficiently transmit it from one place to another. A world of abundant bandwidth will allow developers to do whatever makes the most sense computationally without worrying about the bandwidth constraints. Of course, I don’t know exactly what those frameworks will look like or what applications they will enable, but I don’t think it’s too much of a stretch to think that we’ll be able to continue finding uses for higher bandwidth for a long time.

Comments

  1. “One key issue is latency. People with cable or satellite TV service are used to near-instantaneous, flawless video content, which is difficult to stream reliably over a packet-switched network.”

    So a satellite TV network reruns the movie “Tron” and the latency is 250 milliseconds of satellite time-of-flight, plus buffering at both ends (probably another 250 milliseconds), plus approximately 25 years since the movie was made. No one seems overly bothered by latency in this situation. Packet switching can handle it just fine. This is especially true because most satellite TV networks don’t even offer movies on demand — you can only watch the movie at the scheduled time.
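
    (A back-of-the-envelope check of that figure in Python, assuming a geostationary satellite at about 36,000 km up:)

      # One-way trip: ground -> geostationary satellite -> ground.
      ORBIT_KM = 35_786            # geostationary altitude
      C_KM_PER_S = 299_792         # speed of light

      delay_ms = 1000 * 2 * ORBIT_KM / C_KM_PER_S
      print(f"{delay_ms:.0f} ms")  # ~239 ms, close to the 250 ms quoted above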

    Latency only matters for interactive content like a video phone conversation where new content is coming from both sides. Online gamers always bitch about latency because the person with the faster link gets an advantage in the game.

    There is a different fundamental problem with packet-switched networks, which is that you don’t really know how much bandwidth you have available. Thus, you either have to buy much more than you need, or you have to figure out a way of getting QoS priority for your chosen real-time applications. From an engineering standpoint, QoS is a solved problem (a minimal sketch follows at the end of this comment). We already know how to get arbitrary behaviour out of packet-switched networks (up to the limit of what is actually possible on the given links). What we don’t have is the marketing and billing infrastructure to make QoS economically feasible.

    The problem is that having one single node in a network apply QoS is not terribly useful; you need QoS over the entire path, and with the modern Internet that requires many different corporate entities to maintain this additional infrastructure (and they all want to get paid for it). Then we have Network Neutrality legislation trying to make QoS illegal, so infrastructure providers are even more reluctant to switch it on.

    The end result… it suits the billing model much better to simply sell people vastly more bandwidth than they need, in the hope that the worst case scenario is still good enough.
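
    (The “solved problem” machinery in miniature, in Python on a platform that exposes IP_TOS; the address and payload are made up. The standard mechanism is to mark packets with a DSCP code point and let routers along the path queue them preferentially, if they choose to:)

      import socket

      # DSCP "Expedited Forwarding" (46) sits in the top 6 bits of the TOS byte.
      EF = 46 << 2

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
      # Whether routers honour the marking is exactly the billing and
      # deployment problem described above.
      sock.sendto(b"real-time frame", ("198.51.100.7", 5004))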

  2. On election night, CNN showed an application that eats a lot of bandwidth: holographic communication. Something like 35 or 45 HDTV cameras captured an image and transmitted it from Chicago to CNN’s broadcast center. That’s a lot of bandwidth.

    Not all modes of communication are one person to one person, even though the “end-to-end” demagogues have tried to make us think they are. Multicast applications, either one-to-many or many-to-many, have been developed and used on a small scale for a while, and will certainly become more popular as more bandwidth is available (a minimal sender is sketched at the end of this comment).

    So bandwidth is just like any other computer resource, the more you have the more you use. There is no natural limit based on the sensory inputs of one individual or otherwise; some new mode of communication or content delivery or interaction will consume whatever is there.
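
    (The one-to-many mode in miniature, in Python; the group address is made up, inside the administratively scoped multicast range:)

      import socket

      GROUP, PORT = "239.1.2.3", 5007   # made-up multicast group and port

      sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

      # One send; every receiver that has joined the group gets a copy,
      # without the sender pushing a separate stream to each of them.
      sender.sendto(b"one packet, many viewers", (GROUP, PORT))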

  3. Unfortunately, pretty much the opposite of what Clive Robinson suggests is now the norm — just consider how much bandwidth is expended (not in absolute terms, of course) in the “live preview” function so many sites have for editing and comments.

    I shouldn’t be surprised if it were another generation or two of programmers before we get people who are used to the idea of real peer-to-peer computations on a network rather than models where one end of the connection or the other does precisely the wrong share of the work.

  4. Clive Robinson says

    Bandwidth will always be limited. In the same way that application developers design for a point six months up the Moore’s Law curve on CPUs etc., communications-based applications will always be designed for current bandwidth + xx%.

    And latency cannot be reduced beyond a certain point.

    The usual solution of caching is a right royal pain in the sit-down area at the best of times, due to most online applications not being static in nature.

    However, in the same way “web2” downloads only the bits that change, a change to basic HTTP etc. would make a considerable improvement, not just in the volume of download but in the user’s experience.

    That is, if a page were considered to be made of independent objects, each could be given an identifier and a hash. If the base download pulled down the hashes, then the user’s browser need only request the objects for which it does not have matching hashes.

    Likewise, any caches along the way could supply the correct object if it matched the hash in their object store.

    Taken to its logical conclusion with, say, a blog page, updating would only pull down the comments you have not already got, simultaneously lessening the load on the server, reducing bandwidth, and improving the response to the user.

    With an additional expiry field for each object as well, it could automatically update the page you are looking at.

    Yes, I know there are currently ways to do this sort of thing, but they are all either optional or messy, when they should be neither.
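
    (A toy version of the object-and-hash exchange above, in Python; the names are hypothetical and this is a sketch rather than a real protocol, though ETags and content-addressed caches work on the same principle:)

      import hashlib

      def manifest(objects):
          # Server side: map each object's identifier to a hash of its bytes.
          return {oid: hashlib.sha256(data).hexdigest()
                  for oid, data in objects.items()}

      def update(server_objects, local_store):
          # Client side: pull the small manifest first, then request only
          # the objects whose hashes we do not already hold.
          stale = [oid for oid, digest in manifest(server_objects).items()
                   if local_store.get(oid) is None
                   or hashlib.sha256(local_store[oid]).hexdigest() != digest]
          for oid in stale:
              local_store[oid] = server_objects[oid]  # stand-in for an HTTP GET
          return stale

      page = {"header": b"banner", "post": b"article", "c1": b"new comment"}
      cache = {"header": b"banner", "post": b"article"}
      print(update(page, cache))  # only ['c1'] crosses the wire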

  5. How much bandwidth one person can use is not just limited to streaming high-quality video. If I had almost unlimited bandwidth and computational power, then not only would I be streaming high-quality video, I would also stream some programs that I did not watch, just so I could switch between programs without waiting for them to buffer. I would have a lot of torrents going, downloading and uploading lots of films, most of which I would never have the time to watch. I would also stream music 24 hours a day, and turning the streams off would be out of the question; I would just turn off the sound or the monitor and keep the streaming going.

    I would use mesh computing and have a remote backup, so that everything I had on my hard disk at home would be copied to a remote location, and some of the computations would also have to be transmitted over the net. I would run continual surfing in the background to hide my real surfing habits, and I would run a Tor node and other software offering me and the rest of the internet anonymity at the cost of bandwidth.

    And if that were not enough, I would do everything using MPC (multiparty computation) with, e.g., 10 nodes. Then every local multiplication of l-bit numbers would be turned into a protocol that needed to send 100 l-bit numbers among the nodes.

    So I am guessing that any improvements in caching will be defeated by my careless use of bandwidth.

  6. “maybe in the future we’ll want to load a lot of HD video on our iPhones quickly for mobile use.”

    I think it’s simpler than that. That is, if we get more bandwidth, someone might well figure out how to do something with it that no one really thought of doing before. If you don’t have extra, “too much” bandwidth, that person never thinks about it, and the unthought-of thing never gets thought of, much less implemented. That’s how a lot of discoveries are made, especially the ones that turn out to be really important.

  7. …so all you’re really saying is that you think that it’ll take just a bit longer to reach that point… you agree with the point, you’re just haggling over the price.

  8. Also, in reply to this:

    “Why do professors’ desks have room for more than one book or periodical on them? Certainly you wouldn’t want to pay for space for more printed matter than you can read at one time…”

    Fair enough! But in the context of Tim’s analogy, that desk space is just used for caching — now that machines like the Kindle are viable, will the situation persist? To some extent I’m sure it will (and I don’t want to rehash the boring old “will there be books in the future” argument). But the amount of office space devoted to paper storage will no doubt be on the decline for some time.

    But back to TVs: I’m sure that video displays will continue to get larger for a while, but I suspect there will be a practical limit on what people want to have in their homes. We have Jetsons-style moving sidewalks, after all, but it’s only worth our while to put them in places where they’ll be particularly useful. Until the cost of additional screen real estate falls to near-zero, I don’t expect to see too many video walls… which is not to say that I’m not looking forward to being able to walk into my local hardware store and pick up a bucket of nanotech paint that self-assembles into an array of OLEDs.

    • “now that machines like the Kindle are viable, will the situation persist?”

      Oh, absolutely and then some. Look at all the multiple-monitor setups out there, or look at the work that Xerox PARC (surprise, surprise) did in the mid-to-late ’80s on managing digital desktops the size of a real desktop.

      What it comes down to is that no amount of wire/display bandwidth is as fast as a saccade. I’ve been arguing for 15 years or so that the ideal use of E-paper would be in a pad from which you could tear off a sheet after it was written, so that your desk could be covered with all the different references you were working with, each open to the passage(s) you needed. I’d settle for a few dozen Kindles if I had to.

      What I’m talking about is effectively trading the bandwidth cost of extreme caching for essentially zero latency to the eye, so it fits the general model pretty well. (And as for the current prefetch software in browsers, rendering and display speeds aren’t nearly good enough. Even if the information itself is all there, it typically takes at least a second or two to display the new link, and the old human-interface rules about consistent sub-second response still apply. Once the illusion of seamlessness is broken, another few seconds doesn’t really matter, and may even be easier to get used to.)

  9. You make a great point, Tim, and it’s one that I hadn’t adequately considered. Certainly I’m putting myself on the wrong side of history by saying “users will never find a use for more technological capability!” 64k of RAM, etc.

    But I do think it’s unclear how this complex relationship will work out. Caching can be used as a substitute for bandwidth, yes. But as we get more bandwidth, which lets us do more caching, will we actually *need* that caching? We’ve got all that bandwidth, after all.

    Your portable player example is instructive: maybe in the future we’ll want to load a lot of HD video on our iPhones quickly for mobile use. Or maybe it’ll make more sense to stream that content wirelessly. It’s hard to say. In some ways the situation is analogous to the fat/thin client cycle, which promises to oscillate for years to come. My guess is that eventually there’ll be a limit on how much caching we consider useful, though.

    A potentially useful example — one that’s at least close to the “all the bandwidth you could want for this application” threshold — might be modern web browsers’ ability to pre-fetch content linked from a page prior to the link being followed. Many browsers support this feature, but it hasn’t really taken off, and, to my knowledge, it isn’t enabled on any of them by default. I think it’s seen by most as somewhat wasteful and, as bandwidth has improved, increasingly unnecessary.

  10. Reading Ed saying:

    “But the human nervous system has its own bandwidth limits, too. Maybe there’ll be one more video resolution revolution — HDTV2, let’s say (pending the invention of a more confusing acronym). But to go beyond that will require video walls — they look cool in Total Recall, but why would you pay for something larger than your field of view?”

    I couldn’t help thinking, “Why do professors’ desks have room for more than one book or periodical on them? Certainly you wouldn’t want to pay for space for more printed matter than you can read at one time…”

    What you’re talking about seems like another kind of bandwidth/latency tradeoff, where you’re effectively using the higher bandwidth and computing capacity of the pipe to get better latency through the limited bandwidth of the human connection. It costs a lot of bandwidth (or a lot of caching) to have dozens or hundreds of HD streams ready for perusal whenever the eye lights on one of them, but it costs even more in the limited currency of human bandwidth to not have them all ready.

    It will be interesting to see how the bandwidth/computation/storage issues shake out; already, anyone who gets their video through a DVR or an internet connection is living (at least) a few seconds behind realtime. It will cost a lot of very specialized bandwidth to change that.