Yesterday, Ed considered the idea that there may be “a point of diminishing returns where more capacity doesn’t improve the user’s happiness.” It’s a provocative concept, and one that I want to probe a bit further.
One observation that seems germane is that such thoughts have a pedigree. Henry L. Ellsworth, in his 1843 report to Congress, wrote that “the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”
It seems to me that the idea of diminishing marginal returns is most at home in settings where the task or process under consideration has well-defined boundaries. For example, making steel: Larger steel mills, up to a point, are more efficient than smaller ones. Larger furnaces reduce capital costs per unit of output, and secondary functions like logistics, training and bookkeeping can be spanned across larger amounts of steel without commensurate increases in their cost. But consolidating an industry, and replacing small production facilities with a larger one, does not necessarily involve any fundamental advancement in the state of the art. (It may, of course.)
Innovation—which is the real wellspring of much of human progress—tends not to follow such predictable patterns. Science textbooks like to present sanitized stories of incremental, orderly advancement, but as Thomas Kuhn famously argued, history actually abounds with disjointed progress, serendipitous accidents, and unanticipated consequences, both good and bad.
There are areas in which incremental improvement is the norm: shaving razors, compression algorithms, mileage per gallon. But in each of these areas, the technology being advanced is task-specific. Nobody is going to use their car to shave or their Mach 3 to commute to the office.
But digital computers—Turing machines—are different. It’s an old saw that a digital computer can be used to change or analyze literally any information. When it comes to computers, advancement means faster Turing machines with larger memories, in smaller physical footprints and with lower costs (including, e.g., manufacturing expense and operational electricity needs).
Ed’s observation yesterday that there is an ultimate limit to the bandwidth leading into the human brain is well taken. But in terms of all transmission of digital content globally, the “last hop” from computer to human is already a very small part of the total traffic. Mostly, traffic is among nodes on end-to-end computer networks, among servers in a Beowulf cluster or similar setup, or even traffic among chips on a motherboard or cores in the same chip. Technologies that advance bandwidth capabilities are useful primarily because of the ways they change what computers can do (at the human time scale). The more they advance, the more things, and the more kinds of things, computers will be capable of. It’s very unlikely we’ve thought of them all.
It is also striking how far our capability to imagine new uses for digital technology has lagged behind the advancement of the technology itself. Blogs like this one were effectively possible from the dawn of the World Wide Web (or even before), and they now seem to be a significant part of what the web can most usefully be made to do. But it took years, after the relevant technologies were available, for people to recognize and take advantage of this possibility. Likewise, much of “web 2.0” has effectively meant harnessing relatively old technologies, such as JavaScript, in new and patently unanticipated ways.
The literature of trying to imagine far-out implications of technological advancement is at once both exciting and discouraging: Exciting because it shows that much of what we can imagine probably will happen eventually, and discouraging because it shows that the future is full of major shifts, obvious in retrospect, to which we were blind up until their arrival.
I occasionally try my hand at the “big picture†prognostication game, and enjoy reading the efforts of others. But in the end I’m left feeling that the future, though bright, is mysterious. I can’t imagine a human community, even in the distant future, that has exhausted its every chance to create, innovate and improve its surroundings.
Some years ago, I read a rather depressing book called “Why Things Bite Back”, by Edward Tenner. It was basically a long catalogue of instances of the Law of Unintended Consequences. It seems to me that, as we accumulate more and more technological progress, we also accumulate more and more side effects of that progress, which we then need to spend further effort and ingenuity on ameliorating. As time goes on, dealing with such unintended consequences could very well take an increasing proportion of our effort at technological progress. At some point in the future, it might get so close to 100% that we effectively spend all our energies just standing still.
🙂
tz,
it is interesting that you mention music. i’m not sure how relevant it is to the topic, but it’s fun 🙂
i really disagree with your comment about Schoenberg (confessing he is my favourite composer). Schoenberg not only gave the world new ways of saying things, he gave the world the opportunity to say new things. his techniques have produced some of the most emotionally charged music ever composed. i offer as an example Berg’s violin concerto.
i was recently discussing Penderecki’s Threnody to the Victims of Hiroshima in relation to Picasso’s Guernica. both point to events which traditional methods of production could not hope to communicate. as someone asked in that discussion: can you imagine a Brahms Threnody? equally, can you imagine a Delacroix Guernica?
more generally, there was massive innovation in music (and indeed all the art forms) during the 20thC. think about the plethora of -isms that proliferated.
your comment “modern sounds which are classified as music” points to the profound question ‘what is music?’, and that is an all too open question.
technological and artistic innovation have, for the most part, developed independently of one another. there are some obvious (and massive!) exceptions, of course (printing press, sound recording).
to bring this post into some sort of relevance, i cannot imagine a composer creating super- or sub-audible music — there just isn’t a point to it, because the human ear creates a bottleneck which no amount of “bandwidth” in the sound-form can overcome.
we find a similar situation in printing. there is no point in pushing dpi higher than we have now because the human eye is the bottleneck.
Consider music. Instruments have had minimal technological improvement since the time of Bach, Mozart, and Beethoven, and you don’t see real innovation. I don’t think everything has been said, but there aren’t any new ways of saying it. (Of course there’s Schoenberg, which was a new way of NOT saying it, and some of the modern sounds which are classified as music although they are lacking in melody, harmony, and often even rhythm).
We now have synthesizers, but no composer I know of can compose a Bach fugue. Or a Mozart symphony. Or a Beethoven sonata. There is some nice music, but at best it only approaches what was written in that century-long period.
There is something parallel with computing, and more importantly programming (See “the quality plateau” from “the programmer’s stone”).
Will things come up that will “change everything”? No, but they might change something fundamental or very common. Really cheap and abundant steel made skyscrapers possible, yet medieval cathedrals were built of stone.
We forget that the old Roman empire had lots of technology, as well as arts and sciences, and it disappeared. China over the millennia made many discoveries which ended up as museum curiosities (“that’s neat”) but produced no sustained progress as such.
And there may be a sensory/motor bandwidth limitation, but none such on the will and imagination.
Ed: But this specific thing is only an instance of a more general phenomenon. Anybody in their right mind will prefer a more advanced/superior product over an older/inferior one at a given price point, at least when purchasing one versus the other. But to replace a working “inferior” solution with a “superior” one (discarding the old working one and spending money to buy and integrate the new one), it had better promise (1) a significant enough delta in utility, and (2) a credible use (and preferably a need) for that delta.
Aside from that, the remaining driving forces are branding, fashion, and peer pressure (arguably a variant of fashion) — avoiding the pitiful looks of Mr. and Mrs. Jones. You should see how in my office grown men who are fathers have been showing off their Blackberries and Razrs to each other. Last year large-screen TVs/projectors were the big floor talk, and with certainty one early question in every discussion was “how many inches” (if you get my drift).
And whatever driving forces are there, they define a “market”, and the question is can that market sustain the supplying industries? This is one of the first questions asked when evaluating a prospective business model/plan.
Look at consumer-model US laundry machines. Based on recent experience, open top-loading “passive” models with hot/cold hookups and no internal heating element are still doing great. I have to use one such model, and I consistently have to let the “hot” water run until it finally warms up, manually managing the water temperature lest my laundry sit in cold water. How ridiculous is that? Nevertheless this technology must be good enough, and it seems to work for my landlords. And they have a wireless network too.
Ed, mea culpa: My response did indeed react to a kind of broad skepticism about future advancement that was not present in your original post.
I do think, on the specific question of marginal returns for greater bandwidth to homes, that the diminishing marginal returns model is probably inapplicable. Advancement involving new kinds of usage scenarios seems important. We didn’t really need faster broadband for web browsing, but now that we have it we’re doing online video — something that didn’t just get a little better than it was before, because we were not doing it at all before. I doubt we can ever be confident that further improvements in bandwidth will show diminishing returns, because we can never rule out the possibility that some unforeseen and fundamentally novel usage possibility will be unlocked by the next increment of added bandwidth.
Also, I stand corrected, and chagrined on another point — of course a Turing Machine has an infinitely long tape. I suppose what I meant there was that actual implementations that emulate Turing Machines across actual human usage scenarios are getting longer tapes.
In my defense, I wasn’t forecasting a limit to innovation generally. I was only suggesting that there might be diminishing returns for bandwidth to the home.
Even if we magically achieved infinite bandwidth to every home, innovation in other areas would go on. Indeed, innovation would increase overall.
1. Computer-to-computer communication.
2. Bandwidth useful to a human can/will exceed the maximum she can consume: think interactive processes dealing with latency, e.g. prefetching.
Also:
a. Computers are not Turing Machines; a TM is a model, and has infinite memory by definition (the tape).
b. “End-to-end networks” doesn’t work; end-to-end is a principle, not a kind of network.
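The prefetching point in item 2 can be sketched in a few lines of Python. This is a minimal illustration, not any particular system's API: `fetch`, `keys`, and `depth` are hypothetical names, and the "link" is simulated with a sleep. The background thread keeps consuming bandwidth ahead of the human-paced consumer, which is exactly how useful link capacity can exceed what a person absorbs at any instant.

```python
import queue
import threading
import time

def prefetch(fetch, keys, depth=3):
    """Fetch items ahead of the consumer so link latency is hidden.

    A worker thread runs `fetch` up to `depth` items ahead of whoever
    is iterating over the results.
    """
    buf = queue.Queue(maxsize=depth)

    def worker():
        for k in keys:
            buf.put(fetch(k))   # may run well ahead of the consumer
        buf.put(None)           # sentinel: no more items

    threading.Thread(target=worker, daemon=True).start()
    while (item := buf.get()) is not None:
        yield item

# Usage: the simulated link is slow, but after warm-up the consumer
# rarely waits, because the next items are already in the buffer.
def slow_fetch(k):
    time.sleep(0.01)            # simulated link latency
    return k * 2

print(list(prefetch(slow_fetch, range(5))))  # → [0, 2, 4, 6, 8]
```

Bounding the buffer with `maxsize=depth` is the interesting design choice: it caps how much bandwidth the prefetcher may "waste" on items the user might never look at.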
There is the related concept of the “S curve”. It applies to particular technologies, but at some point also to the whole class of technologies addressing increasingly broader problem areas, and to the evolution of those problem areas as the need for paradigm shifts diminishes.
Of course, the S curve is a smooth abstraction describing a continuous process, whereas technology progresses in leaps. Nonetheless, it applies metaphorically.
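The S curve is conventionally modeled as a logistic function. A minimal sketch (the parameter names here are illustrative, not standard) makes the diminishing-returns end of the curve concrete: the same unit of effort buys far less capability near the ceiling than on the steep middle section.

```python
import math

def s_curve(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic model of a technology's capability over time:
    slow start, rapid middle, flattening near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Marginal return of one more unit of effort shrinks near the top:
early = s_curve(1) - s_curve(0)   # steep part of the curve (~0.23)
late = s_curve(6) - s_curve(5)    # flat part near the ceiling (~0.004)
assert early > 10 * late
```

As the comment notes, real technology advances in leaps rather than along this smooth abstraction, so the model is best read metaphorically.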
Arguably, until now progressing technology has visibly improved important aspects of the human condition. These days I’m no longer sure that’s so clear-cut, or whether the most highly improved conditions (in the “first” Western world) are not actually deteriorating despite or concurrently with technological progress. Perhaps I’m just getting curmudgeonly.
1. Eventually the earth is going to become uninhabitable because of global warming, nuclear war, the sun going nova, or whatever. The only prospect for long term human survival is large-scale migration off planet, which needs a huge amount of technological progress over present technology.
2. The claim that the amount of useful bandwidth is finite because of the brain’s limited input bandwidth is silly. Think of the old saying about the bandwidth of a station wagon full of magtape. Now think of actually using information. Just because I download a lot of stuff doesn’t mean I use it all. I buy a printed dictionary with 100,000 definitions and maybe look up one word in it every few months, so I might actually look at a few hundred of the 100,000 definitions in my lifetime. But I don’t want to access individual definitions online; I want the entire dictionary in my home (a few megabytes) even if 99% is never referred to.

It’s just bandwidth and storage limitations that stop me from downloading Wikipedia (800GB or so including all the multimedia) and having a personal copy on my PC. Maybe I check a DVD (5 GB) out of the film library, but that’s just because of the inconvenience/impossibility and illegality of downloading their whole film collection (100,000 movies or whatever) with one click. In general we have centralized collections of published info (i.e. libraries, bookstores, etc.) purely because of those storage, bandwidth, and legality issues.

The centralization means accessing any specific piece of info depends on some communications infrastructure that’s susceptible to monitoring, censorship, etc. So from a pro-decentralization point of view (I’d rather have my own copy of the whole million-volume library than seek access to the few hundred individual books or films that I actually want to look at), my appetite for bandwidth and storage is basically limitless.
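The back-of-the-envelope arithmetic behind that appetite is easy to sketch. The 800 GB figure is the comment’s own; the link speeds below are hypothetical examples, using decimal units (1 GB = 8×10⁹ bits).

```python
def download_hours(size_gb, mbps):
    """Hours to move `size_gb` gigabytes over an `mbps` megabit/s link."""
    bits = size_gb * 8e9          # decimal gigabytes to bits
    return bits / (mbps * 1e6) / 3600

# The comment's ~800 GB Wikipedia dump:
print(round(download_hours(800, 10)))    # → 178 (about a week on 10 Mbit/s)
print(download_hours(800, 1000) < 2)     # → True (under 2 hours at a gigabit)
```

Each order-of-magnitude increase in bandwidth moves a whole new class of collections (a dictionary, an encyclopedia, a film library) from "must be centralized" to "can be local," which is the pro-decentralization point in a nutshell.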