
Archives for February 2007

Is there any such thing as “enough” technological progress?

Yesterday, Ed considered the idea that there may be “a point of diminishing returns where more capacity doesn’t improve the user’s happiness.” It’s a provocative concept, and one that I want to probe a bit further.

One observation that seems germane is that such thoughts have a pedigree. Henry L. Ellsworth, in his 1843 report to Congress, wrote that “the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”

It seems to me that the idea of diminishing marginal returns is most at home in settings where the task or process under consideration has well-defined boundaries. For example, making steel: Larger steel mills, up to a point, are more efficient than smaller ones. Larger furnaces reduce capital costs per unit of output, and secondary functions like logistics, training and bookkeeping can be spread across larger amounts of steel without commensurate increases in their cost. But consolidating an industry, and replacing small production facilities with a larger one, does not necessarily involve any fundamental advancement in the state of the art. (It may, of course.)
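To make the scale intuition concrete, here is a back-of-the-envelope sketch. The cost figures are invented for illustration, not real steel-industry numbers; the point is only that spreading a fixed overhead over more output yields big savings at first and tiny ones later.

    # Illustrative only: made-up numbers showing diminishing returns to scale.
    # Unit cost = variable cost per ton + fixed overhead spread over annual output.

    FIXED_OVERHEAD = 50_000_000   # hypothetical annual fixed cost (logistics, training, bookkeeping)
    VARIABLE_COST = 300           # hypothetical variable cost per ton of steel

    for tons_per_year in (100_000, 500_000, 1_000_000, 5_000_000, 10_000_000):
        unit_cost = VARIABLE_COST + FIXED_OVERHEAD / tons_per_year
        print(f"{tons_per_year:>10,} tons/yr -> ${unit_cost:,.2f} per ton")

    # Going from 100,000 to 500,000 tons saves about $400 per ton;
    # going from 5,000,000 to 10,000,000 tons saves only about $5 per ton.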

Innovation—which is the real wellspring of much of human progress—tends not to follow such predictable patterns. Science textbooks like to present sanitized stories of incremental, orderly advancement, but as Thomas Kuhn famously argued, history actually abounds with disjointed progress, serendipitous accidents, and unanticipated consequences, both good and bad.

There are areas in which incremental improvement is the norm: shaving razors, compression algorithms, miles per gallon. But in each of these areas, the technology being advanced is task-specific. Nobody is going to use their car to shave or their Mach 3 to commute to the office.

But digital computers—Turing machines—are different. It’s an old saw that a digital computer can be used to transform or analyze literally any kind of information. When it comes to computers, advancement means faster Turing machines with larger memories, in smaller physical footprints and at lower cost (including, e.g., manufacturing expense and operational electricity needs).

Ed’s observation yesterday that there is an ultimate limit to the bandwidth leading into the human brain is well taken. But in terms of all transmission of digital content globally, the “last hop” from computer to human is already a very small part of the total traffic. Mostly, traffic is among nodes on end-to-end computer networks, among servers in a Beowulf cluster or similar setup, or even traffic among chips on a motherboard or cores in the same chip. Technologies that advance bandwidth capabilities are useful primarily because of the ways they change what computers can do (at the human time scale). The more they advance, the more things, and the more kinds of things, computers will be capable of. It’s very unlikely we’ve thought of them all.

It is also striking how far our capability to imagine new uses for digital technology has lagged behind the advancement of the technology itself. Blogs like this one were effectively possible from the dawn of the World Wide Web (or even before), and they now seem to be a significant part of what the web can most usefully be made to do. But it took years, after the relevant technologies were available, for people to recognize and take advantage of this possibility. Likewise, much of “web 2.0” has effectively meant harnessing relatively old technologies, such as JavaScript, in new and patently unanticipated ways.

The literature of trying to imagine far-out implications of technological advancement is at once both exciting and discouraging: Exciting because it shows that much of what we can imagine probably will happen eventually, and discouraging because it shows that the future is full of major shifts, obvious in retrospect, to which we were blind up until their arrival.

I occasionally try my hand at the “big picture” prognostication game, and enjoy reading the efforts of others. But in the end I’m left feeling that the future, though bright, is mysterious. I can’t imagine a human community, even in the distant future, that has exhausted its every chance to create, innovate and improve its surroundings.

How Much Bandwidth is Enough?

It is a matter of faith among infotech experts that (1) the supply of computing and communications will increase rapidly according to Moore’s Law, and (2) the demand for that capacity will grow roughly as fast. This mutual escalation of supply and demand causes the rapid change we see in the industry.

It seems to be a law of physics that Moore’s Law must terminate eventually – there are fundamental physical limits to how much information can be stored, or how much computing can be accomplished in a second, within a fixed volume of space. But these hard limits may be a long way off, so it seems safe to assume that Moore’s Law will keep operating for many more cycles, as long as there is demand for ever-greater capacity.
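To give a feel for how much headroom “many more cycles” could mean, here is a toy extrapolation. Both the present-day density and the physical ceiling below are placeholder assumptions, chosen only to show the shape of the calculation.

    import math

    # Toy extrapolation: how many Moore's Law doublings fit between today's
    # storage density and a hypothetical physical ceiling? Both densities
    # are placeholder assumptions, not measured figures.

    current_bits_per_cm3 = 1e12         # assumed present-day storage density
    limit_bits_per_cm3 = 1e30           # assumed hard physical ceiling

    doublings = math.log2(limit_bits_per_cm3 / current_bits_per_cm3)
    years = doublings * 1.5             # assuming one doubling every ~18 months

    print(f"about {doublings:.0f} doublings, roughly {years:.0f} more years of Moore's Law")

Change either assumption by several orders of magnitude and the answer still comes out in many decades, which is the sense in which the hard limits may be a long way off.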

Thus far, whenever more capacity comes along, new applications are invented (or made practical) to use it. But will this go on forever, or is there a point of diminishing returns where more capacity doesn’t improve the user’s happiness?

Consider the broadband link going into a typical home. Certainly today’s homeowner wants more bandwidth, or at least can put more bandwidth to use if it is provided. But at some point there is enough bandwidth to download any reasonable webpage or program in a split second, or to provide real-time ultra-high-def video streams to every member of the household. When that day comes, do home users actually benefit from having fatter pipes?

There is a plausible argument that a limit exists. The human sensory system has limited (though very high) bandwidth, so it doesn’t make sense to direct more than a certain number of bits per second at the user. At some point, your 3-D immersive stereo video has such high resolution that nobody will notice any improvement. The other senses have similar limits, so at some point you have enough bandwidth to saturate the senses of everybody in the home. You might want to send information to devices in the home; but how far can that grow?
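One way to see why such a ceiling is plausible is to do the crude arithmetic for vision alone. The figures below (per-eye resolution, frame rate, compression ratio) are illustrative assumptions, not claims about actual perceptual limits.

    # Crude estimate of the bandwidth needed to saturate one viewer's vision.
    # Every number here is an illustrative assumption.

    width, height = 8192, 4320        # assume an "8K-class" immersive display per eye
    bits_per_pixel = 30               # 10-bit color, three channels
    frames_per_second = 120
    eyes = 2
    compression_ratio = 100           # assume aggressive but plausible video compression

    raw_bps = width * height * bits_per_pixel * frames_per_second * eyes
    compressed_bps = raw_bps / compression_ratio

    print(f"raw: {raw_bps / 1e9:.0f} Gbps, compressed: {compressed_bps / 1e9:.1f} Gbps per viewer")

Multiply by the number of people in the household and you get a figure that is enormous by today’s standards but still finite, which is exactly the point.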

Such questions may not matter quite yet, but they will matter a great deal someday. The structure of the technology industries, not to mention technology policies, is built around the idea that people will keep demanding more-more-more and the industry will be kept busy providing it.

My gut feeling is that we’ll eventually hit the point of diminishing returns, but it is a long way off. And I suspect we’ll hit the bandwidth limit before we hit the computation and storage limits. I am far from certain about this. What do you think?

(This post was inspired by a conversation with Tim Brown.)

SonyBMG (Accidentally?) Giving Away MP3 of New Billy Joel Song

Billy Joel’s new song, “All My Life,” is being released in stages. Presently it’s available for free streaming from People Magazine’s site. Later in the month it will be available for purchase only at the iTunes Music Store. After that it will be released in other online stores. Or at least that was the plan of the record company, SonyBMG.

As an anonymous reader points out, although the People site looks like it is streaming the song, thus giving users no easy way to copy it, what the site actually does is download a high-quality MP3 file (unencumbered by any copy protection) to the user’s computer, and then play the MP3. The MP3 is dropped in a place where ordinary users won’t stumble across it, but if you know where to look you’ll find it sitting on your computer after you listen to the “stream”. In other words, SonyBMG is, perhaps inadvertently, giving away high-quality MP3s of “All My Life.”

(Technical details, for those who care: The “streaming” control is actually a Flash object that downloads and plays an MP3. It uses the normal browser mechanism to do the downloading, which means that the browser (Firefox, at least) automatically squirrels away a copy of the downloaded file. Result: the MP3 file is left on the user’s system.)
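For readers who want to verify this behavior themselves, a small script along the following lines could scan a browser cache directory for files that begin with MP3 magic bytes. The cache location and the detection heuristic are assumptions about a typical setup, not a description of SonyBMG’s player; adjust the path for your own system.

    import os
    import sys

    # Hypothetical sketch: scan a directory (e.g. your browser's cache folder,
    # passed as the first argument) for files that look like MP3 audio.

    def looks_like_mp3(path):
        """True if the file starts with an ID3 tag or an MPEG frame-sync pattern."""
        try:
            with open(path, "rb") as f:
                header = f.read(3)
        except OSError:
            return False
        if header[:3] == b"ID3":
            return True
        return len(header) >= 2 and header[0] == 0xFF and (header[1] & 0xE0) == 0xE0

    cache_dir = sys.argv[1]
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            if looks_like_mp3(path):
                size_mb = os.path.getsize(path) / (1024 * 1024)
                print(f"{path}  ({size_mb:.1f} MB)")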

The obvious question is why SonyBMG did this. It could be (1) a mistake by an engineer who didn’t realize that the canned music-player control he was using operated by downloading an MP3. Or perhaps (2) the engineer didn’t realize that the browser would keep a copy of the file. Or it could be that (3) SonyBMG knew about all of this and figured users wouldn’t notice, or (4) they figured that any user who could find the MP3 could capture an ordinary stream anyway. For what it’s worth, my money is on (2).

Apple Offers to Sell DRM-Free Music

The Net is buzzing with talk about the open letter posted by Apple CEO Steve Jobs yesterday. In an apparent reversal, Jobs offers to sell MP3 files, free of anti-copying DRM technology, on the iTunes Music Store if the major record companies allow it.

Much as I would like to see Apple renounce DRM entirely, that’s not quite what Jobs is saying. The letter describes three possible futures for Apple’s music technology: (1) continue the current path with a closed Apple-only DRM system; (2) license Apple’s DRM technology to other companies to build compatible systems; and (3) sell DRM-free music.

Apple’s preferred outcome, Jobs says, is outcome (3), selling DRM-free music. This is notable, and somewhat surprising, as the consensus has been that Apple’s strategy was to seek outcome (1), using its proprietary DRM to lock customers in to its iTunes-iPod world. If Apple really prefers to eliminate DRM, that is news.

But this part of the letter might just be cheap talk. As Jobs points out in the letter, Apple sells music at the pleasure of the record companies. And if the record companies announce tomorrow that they don’t want Apple to use DRM, then Apple will have little choice but to smile and go along.

So there is little downside to Apple saying that they are willing to get rid of DRM. In this respect, Apple is like the kid who says he is willing to go to the dentist, because he knows that no matter what he says he’s going to see the dentist whenever his parents want him to.

The least-discussed aspect of the letter is its praise for the status quo (outcome (1)). Jobs says that the current system is working well:

The first alternative is to continue on the current course, with each manufacturer competing freely with their own “top to bottom” proprietary systems for selling, playing and protecting music. It is a very competitive market, with major global companies making large investments to develop new music players and online music stores. Apple, Microsoft and Sony all compete with proprietary systems. Music purchased from Microsoft’s Zune store will only play on Zune players; music purchased from Sony’s Connect store will only play on Sony’s players; and music purchased from Apple’s iTunes store will only play on iPods. This is the current state of affairs in the industry, and customers are being well served with a continuing stream of innovative products and a wide variety of choices.

His real scorn is for outcome (2), where Apple licenses its DRM technology to other companies. It’s easy to see why this is the worst outcome for Apple – the company loses its ability to lock in customers, but everybody still has to put up with the cost and hassle of using DRM.

What the letter really does, in typical Jobsian fashion, is frame the debate. It does this in two respects. First, it sets up a choice between two alternatives: stay the course, or get rid of DRM entirely. Second, it points the finger at the major record companies as the ones making the choice.

This is both a clever PR move and a proactive defense against European antitrust scrutiny. Mandatory licensing is a typical antitrust remedy in situations like this, so Apple wants to take licensing off the table as an option. Most of all, Apple wants to deflect the blame for the current situation onto the record companies. Steve Jobs is a genius at this sort of thing, and it looks like he will succeed again.

Sarasota: Limited Investigations

As I wrote last week, malfunctioning voting machines are one of the two plausible theories that could explain the mysterious undervotes in Sarasota’s congressional race. To get a better idea of whether malfunctions could be the culprit, we would have to investigate – to inspect the machines and their software for any relevant errors in design or operation. A well-functioning electoral system ought to be able to do such investigations in an open and thorough manner.

Two attempts have been made to investigate. The first was by representatives of Christine Jennings (the officially losing candidate) and a group of voters, who filed lawsuits challenging the election results and asked, as part of the suits’ discovery process, for access by their experts to the machines and their code. The judge denied their request, in a curious order that seemed to imply that they would first have to prove that there was probably a malfunction before they could be granted access to the evidence needed to tell whether there was a malfunction.

The second attempt was by the Department of State (DOS) of the state of Florida, who commissioned a study by outside experts. Oddly, I am listed in the official Statement of Work (SOW) as a principal investigator on the study team, even though I am not a member of the team. Many people have asked how this happened. The short answer is that I discussed with representatives of DOS the possibility of participating, but eventually it became clear that the study they wanted to commission was far from the complete, independent study I had initially thought they wanted.

The biggest limitation on the study is that DOS is withholding information and resources needed for a complete study. Most notably, they are not providing access to voting machines. You don’t have to be a rocket scientist to realize that if you want to understand the behavior of voting machines, it helps to have a voting machine to examine. DOS could have provided or facilitated access to a machine, but it apparently chose not to do so. [Correction (Feb. 28): The team’s final report revealed that DOS had changed its mind and given the team access to voting machines.] The Statement of Work is clear that the study is to be “a … static software analysis on the iVotronics version 8.0.1.2 firmware source code”.

(In computer science, “static” analysis of software refers to methods that examine the text of the software; “dynamic” methods observe and measure the software while it is running.)
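To give a flavor of the distinction, here is a generic snippet, purely illustrative and unrelated to any actual voting-machine code. A static analyzer can flag the problem just by reading the source, whereas a dynamic approach would only catch it if a test run happened to exercise the bad case.

    # Purely illustrative; not from any real voting system.
    # A static analyzer can warn that `chosen` may be used before assignment
    # simply by inspecting this source. Dynamic testing finds the bug only if
    # some test run supplies a ballot with no selection made.

    def record_vote(selections):
        for candidate, selected in selections:
            if selected:
                chosen = candidate
        return chosen  # possibly unbound if nothing was selected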

The good news is that the team doing the study is very strong technically, so there is some hope of a useful result despite the limited scope of the inquiry. There have been some accusations of political bias against team members, but knowing several members of the team I am confident that these charges are misguided and the team won’t be swayed by partisan politics. The limits on the study aren’t coming from the team itself.

The results of the DOS-sponsored study should be published sometime in the next few months.

What we have not seen, and probably won’t, is a full, independent study of the iVotronic machines. The voters of Sarasota County – and everyone who votes on paperless machines – are entitled to a comprehensive study of what happened. Sadly, it looks like lawyers and politics will stop that from happening.