MySpace Photos Leaked; Payback for Not Fixing Flaw?

Last week an anonymous person published a file containing half a million images, many of which had been gathered from private profiles on MySpace. This may be the most serious privacy breach yet at MySpace. Kevin Poulsen’s story at Wired News implies that the leak may have been deliberate payback for MySpace failing to fix the vulnerability that allowed the leaks.

“I think the greatest motivator was simply to prove that it could be done,” file creator “DMaul” says in an e-mail interview. “I made it public that I was saving these images. However, I am certain there are mischievous individuals using these hacks for nefarious purposes.”

The MySpace hole surfaced last fall, and it was quickly seized upon by the self-described pedophiles and ordinary voyeurs who used it, among other things, to target 14- and 15-year-old users who’d caught their eye online. A YouTube video showed how to use the bug to retrieve private profile photos. The bug also spawned a number of ad-supported sites that made it easy to retrieve photos. One such site reported more than 77,000 queries before MySpace closed the hole last Friday following Wired News’ report.

MySpace plugged a similar security hole (http://grownupgeek.blogspot.com/2006/08/myspace-closes-giant-security-hole.html) in August 2006 when it made the front page of Digg, four months after it surfaced.

The implication here, not quite stated, is that DMaul was trying to draw attention to the flaw in order to force MySpace to fix it. If this is what it took to get MySpace to fix the flaw, this story reflects very badly on MySpace.

Anyone who has discovered security flaws in commercial products knows that companies react to flaws in two distinct ways. Smart companies react constructively: they’re not happy about the flaws or the subsequent PR fallout, but they acknowledge the truth and work in their customers’ interest to fix problems promptly. Other companies deny problems and delay addressing them, treating security flaws solely as PR problems rather than real risks.

Smart companies have learned that a constructive response minimizes the long-run PR damage and, not coincidentally, protects customers. But some companies seem to lock themselves into the deny-delay strategy.

Now suppose you know that a company’s product has a flaw that is endangering its customers, and the company is denying and delaying. There is something you can do that will force them to fix the problem – you can arrange an attention-grabbing demonstration that will show customers (and the press) that the risk is real. All you have to do is exploit the flaw yourself, get a bunch of private data, and release it. Which is pretty much what DMaul did.

To be clear, I’m not endorsing this course of action. I’m just pointing out why someone might find it attractive despite the obvious ethical objections.

The really interesting aspect of Poulsen’s article is that he doesn’t quite connect the dots and say that DMaul meant to punish MySpace. But Poulsen is savvy enough that he probably wouldn’t have missed the implication either, and he could have written the article to avoid it had he wanted to. Maybe I’m reading too much into the article, but I can’t help suspecting that DMaul was trying to punish MySpace for its lax security.

New $2B Dutch Transport Card is Insecure

The new Dutch transit card system, on which $2 billion has been spent, was recently shown by researchers to be insecure. Three attacks have been announced by separate research groups. Let’s look at what went wrong and why.

The system, known as OV-chipkaart, uses contactless smart cards, a technology that allows small digital cards to communicate by radio over short distances (i.e. centimeters or inches) with reader devices. Riders would carry either a disposable paper card or a more permanent plastic card. Riders would “charge up” a card by making a payment, and the card would keep track of the remaining balance. The card would be swiped past the turnstile on entry and exit from the transport system, where a reader device would authenticate the card and cause the card to deduct the proper fare for each ride.

The disposable and plastic cards use different technologies. The disposable card, called Mifare Ultralight, is small, light, and inexpensive. The reusable plastic card, Mifare Classic, uses more sophisticated technologies.

The first attack, published in July 2007, came from Pieter Sieckerman and Maurits van der Schee of the University of Amsterdam, who found vulnerabilities in the Ultralight system. Their main attacks manipulated Ultralight cards, for example by “rewinding” a card to a previous state so it could be re-used. These attacks looked fixable by changing the system’s software, and Sieckerman and van der Schee described the necessary fixes. But it was also evident that a cleverly constructed counterfeit Ultralight card would be able to defeat the system in a manner that would be very difficult to defend against.

The fundamental security problem with the disposable Ultralight card is that it doesn’t use cryptography, so the card cannot keep any secrets from an attacker. An attacker who can read a card (e.g., by using standard equipment to emulate a card reader) can know exactly what information is stored on the card, and therefore can make another device that will behave identically to the card. Except, of course, that the attacker’s device can always return itself to the “fully funded” state. Roel Verdult of Radboud University implemented this “cloning” attack and demonstrated it on Dutch television, leading to the recent uproar.
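To make the problem concrete, here is a toy sketch in Python of why a secret-free card can always be cloned. This is not the real Mifare Ultralight memory layout or reader protocol; the classes and fields are simplified assumptions purely for illustration. Once an attacker has read everything the card will ever reveal, an emulator that replays that data is indistinguishable from the genuine card, and nothing stops the emulator from resetting its own balance.

    # Toy illustration only -- not the real Mifare Ultralight data format or protocol.
    # The point: a card with no secrets cannot be told apart from an emulator of it.

    class GenuineCard:
        def __init__(self, balance_cents):
            self.balance = balance_cents      # stored in the clear; any reader can see it

        def read_all(self):
            return {"balance": self.balance}  # everything the card knows is readable

        def deduct(self, fare):
            self.balance -= fare

    class EmulatedCard:
        """Built from a single read of a genuine card."""
        def __init__(self, dump):
            self.initial = dict(dump)         # snapshot of the genuine card's state
            self.balance = dump["balance"]

        def read_all(self):
            return {"balance": self.balance}

        def deduct(self, fare):
            self.balance -= fare

        def rewind(self):
            self.balance = self.initial["balance"]   # back to the "fully funded" state

    genuine = GenuineCard(balance_cents=2000)
    clone = EmulatedCard(genuine.read_all())       # one read is enough to clone
    clone.deduct(170)                              # ride the tram...
    clone.rewind()                                 # ...then restore the balance
    assert clone.read_all() == genuine.read_all()  # a turnstile cannot tell them apart

A card that could prove knowledge of a secret key would break this symmetry, which is what the Mifare Classic design attempts.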

The plastic Mifare Classic card does use cryptography: legitimate cards contain secret keys that they use to authenticate themselves to readers. So attackers cannot straightforwardly clone a card. Mifare Classic was designed to use a secret encryption algorithm.

Karsten Nohl, “Starbug,” and Henryk Plötz announced an attack that involved opening up a Mifare Classic card and capturing a high-resolution image of the circuitry, which they then used to reverse-engineer the cryptographic algorithm. They didn’t publish the algorithm, but their work shows that a real attacker could get the algorithm too.

Unmasking of the algorithm should have been no problem, had the system been engineered well. Kerckhoffs’s Principle, one of the bedrock maxims of cryptography, says that security should never rely on keeping an algorithm secret. It’s okay to have a secret key, if the key is randomly chosen and can be changed when needed, but you should never bank on an algorithm remaining secret.

Unfortunately the designers of Mifare Classic did not follow this principle. Instead, they chose to combine a secret algorithm with a relatively short 48-bit key. This is a problem because once you know the algorithm it’s possible for an attacker to search the entire 48-bit key space, and therefore to forge cards, in a matter of days or weeks. With 48 key bits, there are only about 280 trillion possible keys, which sounds like a lot to the person on the street but isn’t much of a barrier to today’s computers.
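To see how small a 48-bit keyspace really is, here is a back-of-the-envelope estimate. The key-testing rates below are illustrative assumptions, not measurements of any particular attack hardware, but they show why “days or weeks” is the right order of magnitude once the algorithm is known.

    # Rough estimate of an exhaustive search over a 48-bit keyspace.
    # The keys-per-second figures are illustrative assumptions only.

    KEYSPACE = 2 ** 48            # about 2.8e14 possible keys

    for label, keys_per_second in [
        ("single desktop PC (assumed 10 million keys/sec)", 1e7),
        ("small cluster or FPGA rig (assumed 1 billion keys/sec)", 1e9),
    ]:
        worst_case_days = KEYSPACE / keys_per_second / 86_400
        print(f"{label}: about {worst_case_days:,.0f} days worst case, "
              f"{worst_case_days / 2:,.0f} on average")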

Now the Dutch authorities have a mess on their hands. About $2 billion has been invested in this project, but serious fraud seems likely if it is deployed as designed. This kind of disaster would have been less likely had the design process been more open. Secrecy was not only an engineering mistake (violating Kerckhoffs’s Principle) but also a policy mistake, as it allowed the project to get so far along before independent analysts had a chance to critique it. A more open process, like the one the U.S. government used in choosing the Advanced Encryption Standard (AES), would have been safer. Governments seem to have a hard time understanding that openness can make you more secure.

Could Use-Based Broadband Pricing Help the Net Neutrality Debate?

Yesterday, thanks to a leaked memo, it came to light that Time Warner Cable intends to try out use-based broadband pricing on a few of its customers. It looks like the plan is for several tiers of use, with the heaviest users possibly paying overage charges on a per-byte basis. In confirming its plans to Reuters, Time Warner pointed out that its heaviest-using five percent of customers generate the majority of data traffic on the network, but still pay as though they were typical users. Under the new proposal, pricing would be based on the total amount of data transferred, rather than the peak throughput on a connection.
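The details of the plan are evidently not settled, so the numbers in the sketch below are purely hypothetical, but they show the shape of the model being described: a flat monthly fee buys a data allowance, and usage beyond the allowance incurs a per-gigabyte overage charge.

    # Hypothetical tiers for illustration only; Time Warner's actual caps and
    # overage rates were not public at the time of writing.

    TIERS = [
        # (monthly_price_dollars, included_gigabytes)
        (29.95, 5),
        (39.95, 20),
        (49.95, 40),
    ]
    OVERAGE_PER_GB = 1.50   # assumed per-gigabyte charge beyond the allowance

    def monthly_bill(tier_index, gigabytes_used):
        price, allowance = TIERS[tier_index]
        overage_gb = max(0, gigabytes_used - allowance)
        return price + overage_gb * OVERAGE_PER_GB

    print(monthly_bill(0, 3))    # light user under the cap: 29.95
    print(monthly_bill(2, 90))   # heavy user, 50 GB over: 49.95 + 75.00 = 124.95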

If the current flat-rate pricing is based on what the connection is worth to a typical customer, who makes only limited use of the connection, then the heaviest five percent of users (let’s call them super-users as shorthand) are reaping a surplus. Bandwidth use might be highly elastic with respect to price, but I think it is also true that the super-users do reap a great deal more benefit from their broadband connections than other users do – think of those who pioneer video consumption online, for example.

What happens when network operators fail to capture this surplus? They have marginally less incentive to build out the network and drive down the unit cost of data transfer. If the pricing model changed so that network providers’ revenue remained the same in total but was based directly on how much the network is used, then the price would go down for the lightest users and up for the heaviest. If a tiered structure left prices the same for most users and raised them on the heaviest, operators’ total revenue would go up. In either case, network operators would have an incentive to encourage innovative, high-bandwidth uses of their networks – regardless of what kind of use that is.

Gigi Sohn of Public Knowledge has come out in favor of Time Warner’s move on these and other grounds. It’s important to acknowledge that network operators still have familiar, monopolistic reasons to intervene against traffic that competes with phone service or cable. But under the current pricing structure, they’ve had a relatively strong argument to discriminate in favor of the traffic they can monetize, and against the traffic they can’t. By allowing them to monetize all traffic, a shift to use-based pricing would weaken one of the most persuasive reasons network operators have to oppose net neutrality.

Clinton's Digital Policy

This is the second in our promised series summing up where the 2008 presidential candidates stand on digital technology issues. (See our first post, about Obama.) This time, we’ll take a look at Hillary Clinton.

Hillary has a platform plank on innovation. Much of it will be welcome news to the research community: She wants to up funding for basic research, and increase the number and size of NSF fellowships for graduate students in the sciences. Beyond urging more spending (which is, arguably, all too easy at this point in the process), she indicates her priorities by urging two shifts in how science funds are allocated. First, relative to their current slice of the federal research funding pie, she wants a disproportionate amount of the increase in funding to go to the physical sciences and engineering. Second, she wants to “require that federal research agencies set aside at least 8% of their research budgets for discretionary funding of high-risk research.” Where the 8% figure comes from, and which research would count as “high risk,” I don’t know. Readers, can you help?

As far as specifically digital policy questions go, she highlights just one: broadband. She supports “tax incentives to encourage broadband deployment in underserved areas,” as well as providing “financial support” for state, local, and municipal broadband initiatives. Government mandates designed to help the communications infrastructure of rural America keep pace with the rest of the country are an old theme, familiar in the telephone context as universal service requirements. That program taxes the telecommunications industry’s commercial activity, and uses the proceeds to fund deployment in areas where profit-seeking actors haven’t seen fit to expand. It’s politically popular in part because it serves the interests of less-populous states, which enjoy disproportionate importance in presidential politics.

On the larger question of subsidizing broadband deployment everywhere, the Clinton position outlined above strikes me, at its admittedly high level of vagueness, as being roughly on target. I’m politically rooted in the laissez-faire, free-market right, which tends to place a heavy burden of justification on government interventions in markets. In its strongest and most brittle form, the free-market creed can verge on naturalistic fallacy: For any proposed government program, the objection can be raised, “if that were really such a good idea, a private enterprise would be doing it already, and turning a profit.” It’s an argument that applies against government interventions as such, and that has often been used to oppose broadband subsidies. Broadband is attractive and valuable, and people like to buy it, the reasoning goes–so there’s no need to bother with tax-and-spend supports.

The more nuanced truth, acknowledged by thoughtful participants all across the debate, is that subsidies can be justified if but only if the market is failing in some way. In this case, the failure would be a positive externality: adding one more customer to the broadband Internet conveys benefits to so many different parties that network operators can’t possibly hope to collect payment from all of them.

The act of plugging someone in creates a new customer for online merchants, a present and future candidate for employment by a wide range of far-flung employers, a better-informed and more critical citizen, and a happier, better-entertained individual. To the extent that each of these benefits is enjoyed by the customer, they will come across as willingness to pay a higher price for broadband service. But to the extent that other parties derive these benefits, the added value that would be created by the broadband sale will not express itself as a heightened willingness to pay, on the part of the customer. If there were no friction at all, and perfect foreknowledge of consumer behavior, it’s a good bet that Amazon, for example, would be willing to chip in on individual broadband subscriptions of those who might not otherwise get connected but who, if they do connect, will become profitable Amazon customers. As things are, the cost of figuring out which third parties will benefit from which additional broadband connection is prohibitive; it may not even be possible to find this information ahead of time at any price because human behavior is too hard to predict.

That means there’s some amount of added benefit from broadband that is not captured on the private market – the price charged to broadband customers is higher than would be economically optimal. Policymakers, by intervening to put downward pressure on the price of broadband, could lead us into a world where the myriad potential benefits of digital technology come at us stronger and sooner than they otherwise might. Of course, they might also make a mess of things in any of a number of ways. But at least in principle, a broadband subsidy could and should be done well.

One other note on Hillary: Appearing on Meet the Press yesterday (transcript here), she weighed in on Internet-enabled transparency. It came up tangentially, when Tim Russert asked her to promise she wouldn’t repeat her husband’s surprise decision to pardon political allies over the objection of the Justice Department. The pardon process, Hillary maintained, should be made more transparent–and, she went on to say:

I want to have a much more transparent government, and I think we now have the tools to make that happen. You know, I said the other night at an event in New Hampshire, I want to have as much information about the way our government operates on the Internet so the people who pay for it, the taxpayers of America, can see that. I want to be sure that, you know, we actually have like agency blogs. I want people in all the government agencies to be communicating with people, you know, because for me, we’re now in an era–which didn’t exist before–where you can have instant access to information, and I want to see my government be more transparent.

This seems strongly redolent of the transparency thrust in Obama’s platform. If nothing else, it suggests that his focus on the issue may be helping pull the field into more explicit, more concrete support for the Internet as a tool of government transparency. Assuming that either Obama or Clinton becomes the nominee, November will offer at least one major-party presidential candidate who is on record supporting specific new uses of the Internet as a transparency tool.

Second Life Welcomes Bank Regulators

Linden Lab, the company that runs the popular virtual world Second Life, announced Tuesday that all in-world “banks” must now be registered with real-world banking regulators:

As of January 22, 2008, it will be prohibited to offer interest or any direct return on an investment (whether in L$ or other currency) from any object, such as an ATM, located in Second Life, without proof of an applicable government registration statement or financial institution charter. We’re implementing this policy after reviewing Resident complaints, banking activities, and the law, and we’re doing it to protect our Residents and the integrity of our economy.

This is a significant step. Thus far Second Life, like other virtual worlds, has tried to avoid entanglement with heavyweight real-world regulatory agencies. Now they are welcoming banking regulation. The reason is simple: unregulated “banks” were out of control.

Since the collapse of Ginko Financial in August 2007, Linden Lab has received complaints about several in-world “banks” defaulting on their promises. These banks often promise unusually high rates of L$ return, reaching 20, 40, or even 60 percent annualized.

Linden’s announcement explains why the company is stepping in:

Usually, we don’t step in the middle of Resident-to-Resident conduct – letting Residents decide how to act, live, or play in Second Life.

But these “banks” have brought unique and substantial risks to Second Life, and we feel it’s our duty to step in. Offering unsustainably high interest rates, they are in most cases doomed to collapse – leaving upset “depositors” with nothing to show for their investments. As these activities grow, they become more likely to lead to destabilization of the virtual economy. At least as important, the legal and regulatory framework of these non-chartered, unregistered banks is unclear, i.e., what their duties are when they offer “interest” or “investments.”

This was inevitable, given the ever-growing connections between the virtual economy of Second Life and the real-world economy. In-world Linden Dollars are exchangeable for real-world dollars, so financial crime in Second Life can make you rich in the real world. Linden doesn’t have the processes in place to license “banks” or investigate problems. Nor does it have the enforcement muscle to put bad guys in jail.

Expect this trend to continue. As virtual world “games” are played for higher and higher stakes, the regulatory power of national governments will look more and more necessary.

Scoble/Facebook Incident: It's Not About Data Ownership

Last week Facebook canceled, and then reinstated, Robert Scoble’s account because he was using an automated script to export information about his Facebook friends to another service. The incident triggered a vigorous debate about who was in the right. Should Scoble be allowed to export this data from Facebook in the way he did? Should Facebook be allowed to control how the data is presented and used? What about the interests of Scoble’s friends?

An interesting meme kept popping up in this debate: the idea that somebody owns the data. Kara Swisher says the data belong to Scoble:

Thus, [Facebook] has zero interest in allowing people to escape easily if they want to, even though THE INFORMATION ON FACEBOOK IS THEIRS AND NOT FACEBOOK’S.

Sorry for the caps, but I wanted to be as clear as I could: All that information on Facebook is Robert Scoble’s. So, he should–even if he agreed to give away his rights to move it to use the service in the first place (he had no other choice if he wanted to join)–be allowed to move it wherever he wants.

Nick Carr disagrees, saying the data belong to Scoble’s friends:

Now, if you happen to be one of those “friends,” would you think of your name, email address, and birthday as being “Scoble’s data” or as being “my data.” If you’re smart, you’ll think of it as being “my data,” and you’ll be very nervous about the ability of someone to easily suck it out of Facebook’s database and move it into another database without your knowledge or permission. After all, if someone has your name, email address, and birthday, they pretty much have your identity – not just your online identity, but your real-world identity.

Scott Karp asks whether “Facebook actually own your data because you agreed to that ownership in the Terms of Service.” And Louis Gray titles his post “The Data Ownership Wars Are Heating Up”.

Where did we get this idea that facts about the world must be owned by somebody? Stop and consider that question for a minute, and you’ll see that ownership is a lousy way to think about this issue. In fact, much of the confusion we see stems from the unexamined assumption that the facts in question are owned.

It’s worth noting, too, that even today’s expansive intellectual property regimes don’t apply to the data at issue here. Facts aren’t copyrightable; there’s no trade secret here; and this information is outside the subject matter of patents and trademarks.

Once we give up the idea that the fact of Robert Scoble’s friendship with (say) Lee Aase, or the fact that that friendship has been memorialized on Facebook, has to be somebody’s exclusive property, we can see things more clearly. Scoble and Aase both have an interest in the facts of their Facebook-friendship and their real friendship (if any). Facebook has an interest in how its computer systems are used, but Scoble and Aase also have an interest in being able to access Facebook’s systems. Even you and I have an interest here, though probably not so strong as the others, in knowing whether Scoble and Aase are Facebook-friends.

How can all of these interests best be balanced in principle? What rights do Scoble, Aase, and Facebook have under existing law? What should public policy say about data access? All of these are difficult questions whose answers we should debate. Declaring these facts to be property doesn’t resolve the debate – all it does is rule out solutions that might turn out to be the best.

2008 Predictions

Here are the official Freedom to Tinker predictions for 2008, based on input by Alex Halderman, David Robinson, Dan Wallach, and me.

(1) DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

(2) Copyright issues will still be gridlocked in Congress.

(3) No patent reform bill will be passed. Baby steps toward a deal between the infotech and biotech industries won’t lead anywhere.

(4) DRM-free sales will become standard in the music business. The movie studios will flirt with the idea of DRM-free sales but won’t take the plunge, yet.

(5) The 2008 elections will not see an e-voting meltdown of Florida 2000 proportions, but a bevy of smaller problems will be reported, further fueling the trend toward reform.

(6) E-voting lawsuits will abound, with voters suing officials, officials suing other officials, and officials suing vendors (or vice versa).

(7) Second Life will jump the shark and the cool kids will start moving elsewhere; but virtual worlds generally will lumber on.

(8) MySpace will begin its long decline, losing customers for the first time.

(9) The trend toward open cellular data networks will continue, but not as quickly as optimists had hoped.

(10) If a Democrat wins the White House, we’ll hear talk about reinvigorated antitrust enforcement in the tech industries. (But of course it will all be talk, as the new administration won’t take office until 2009.)

(11) A Facebook application will cause a big privacy to-do.

(12) There will be calls for legislation to create a sort of Web 2.0 user’s bill of rights, giving users rights to access and extract information held by sites; but no action will be taken.

(13) An epidemic of news stories about teenage webcam exhibitionism will lead to calls for regulation.

(14) Somebody will get Skype or a similar VoIP client running on an Apple iPhone and it will, at least initially, operate over AT&T’s cellular phone network. AT&T and/or Apple will go out of their way to break this, either by filtering the network traffic or by locking down the iPhone.

Feel free to offer your own predictions in the comments.

New York Times Magazine on e-voting

This Sunday’s New York Times Magazine has an article by Clive Thompson on electronic voting machines. Freedom to Tinker‘s Ed Felten is briefly quoted, as are a small handful of other experts. The article is a reasonable summary of where we are today, with paperless electronic voting systems on a downswing and optical scan paper ballots gaining in popularity. The article even conveys the importance of open source and the broader importance of transparency, i.e., convincing the loser that he or she legitimately lost the election.

A few points in the article are worth clarifying. For starters, Pennsylvania is cited as the “next Florida” – a swing state using paperless electronic voting systems whose electoral votes could well be decisive in the 2008 presidential election. In other words, Pennsylvania has the perfect recipe to cause electoral chaos this November. Pennsylvania presently bans paper-trail attachments to voting systems. While it’s not necessarily too late to reverse this decision, Pennsylvania’s examiner for electronic voting systems, Michael Shamos, has often (and rightly) criticized these continuous paper-tape systems for their ability to compromise voters’ anonymity. Furthermore, the article cites evidence from Ohio, where a claimed 20 percent of these paper-tape printers jammed, presumably without voters noticing and complaining. This is also consistent with a recent PhD thesis by Sarah Everett, in which she used a homemade electronic voting system that would insert deliberate errors into the summary screen. About two thirds of her test subjects never noticed the errors and, amazingly enough, gave the system extremely high subjective marks. If voters don’t notice errors on a summary screen, then it’s reasonable to suppose that voters would be similarly unlikely to notice errors on a printout.

Rather than adding a bad paper-tape printer, the article explains that hand-marked, optically tabulated paper ballots are presently seen as the best available voting technology. For technologies presently on the market and certified for use, this is definitely the case. A variety of assistive devices exist to help voters with low vision, no vision, and other needs, although there’s plenty of room for improvement on that score.

Unfortunately, optical scanners, themselves, have their own security problems. For example, the Hart InterCivic eScan (a precinct-based optical scanner) has an Ethernet port on the back, and you can pretty much just jack in and send it arbitrary commands that can extract or rewrite the firmware and/or recorded votes. This year’s studies from California and Ohio found a variety of related issues. [I was part of the source code review team for the California study of Hart InterCivic.] The only short-term solution to compensate for these design flaws is to manually audit the results. This is probably the biggest issue glossed over in the article: when you have an electronic tabulation system, you must also have a non-electronic auditing procedure to verify the correctness of the electronic tabulation. This is best done by randomly sampling the ballots by hand and statistically comparing them to the official totals. In tight races, you sample more ballots to increase your confidence. Rep. Rush Holt’s bill, which has yet to come up for a vote, would require this nationwide, but it’s something that any state or county could and should institute on its own.
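The sampling arithmetic is simple enough to sketch. The calculation below is a deliberately simplified illustration of the idea, not the procedure in the Holt bill or in any state’s law: the tighter the margin, the fewer miscounted ballots it would take to change the outcome, and so the more ballots you have to sample to be confident of catching at least one problem if the outcome is wrong.

    # Simplified audit-size estimate, for illustration only.  Real audit
    # procedures use more careful statistics than this approximation.

    import math

    def ballots_to_audit(total_ballots, margin_fraction, confidence=0.99):
        # Changing the outcome requires miscounting at least roughly this many
        # ballots (each switched vote moves a two-way margin by two).
        min_bad = max(1, math.ceil(total_ballots * margin_fraction / 2))
        # A random sample of size n misses all of them with probability about
        # (1 - n/N)^min_bad; pick n so that probability drops below 1 - confidence.
        n = total_ballots * (1 - (1 - confidence) ** (1 / min_bad))
        return math.ceil(n)

    print(ballots_to_audit(100_000, margin_fraction=0.10))   # comfortable margin: ~93 ballots
    print(ballots_to_audit(100_000, margin_fraction=0.002))  # razor-thin margin: ~4,500 ballots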

Lastly, the article has a fair amount of discussion of the Sarasota fiasco in November 2006, where roughly one in seven votes that were cast electronically were recorded as “undervotes” in the Congressional race, while far fewer undervotes were recorded in other races on the same ballot. If you do any sort of statistical projection to replace even a fraction of those undervotes with the observed ratios of cast votes, then the Congressional race would have had a different winner. [I worked as an expert for the Jennings campaign in the Sarasota case. David Dill and I wrote a detailed report on the Sarasota undervote issue. It is our opinion that there is not presently any definitive explanation for the causes of Sarasota's undervote rate and a lot of analysis still needs to be performed.]
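To see why the projection argument is so strong, here is the arithmetic with round, order-of-magnitude numbers. These figures are illustrative, not the exact totals from the official canvass or from the report mentioned above; the point is only that when the undervote pool is tens of times larger than the winning margin, even a partial projection at the observed vote ratio swamps that margin.

    # Illustrative projection only; round numbers, not the official Sarasota totals.

    undervotes         = 18_000   # roughly one in seven electronic ballots (illustrative)
    official_margin    = 400      # district-wide winning margin, order of magnitude
    jennings_share     = 0.53     # Jennings' share of the *recorded* votes (illustrative)
    fraction_projected = 0.5      # suppose only half the undervotes were real lost votes

    projected = undervotes * fraction_projected
    net_swing = projected * (jennings_share - (1 - jennings_share))  # net votes toward Jennings

    print(f"projected net swing: {net_swing:,.0f} votes, versus a margin of about {official_margin}")
    # Even projecting only half the undervotes yields a swing (~540 votes here)
    # larger than the margin; projecting all of them doubles it.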

There are three theories raised in the article to explain Sarasota’s undervote anomaly: deliberate abstention (voters deliberately choosing to leave the race blank), human factors (voters being confused by the layout of the page), and malfunctioning machines. The article offers no support for the abstention theory beyond the assertions of Kathy Dent, the Sarasota County election supervisor, and ES&S, Sarasota’s equipment vendor (neither of whom has ever offered any evidence for these assertions). Dan Rather Reports covered many of the issues that could lead to machine malfunction, including poor quality control in manufacturing. To support the human factors theory, the article refers only to “early results from a separate test by an MIT professor”, but the professor in question, Ted Selker, has never published these results. The only details I’ve ever been able to find about his experiments are in this quote from a Sarasota Herald-Tribune article:

On Tuesday [November 14, 2006], Selker set up a computer with a dummy version of the Sarasota ballot at the Boston Museum of Science to test the extent of the ballot design problems.

Twenty people cast fake ballots and two people missed the District 13 race. But the experiment was hastily designed and had too few participants to draw any conclusion, Selker said.

Needless to say, that’s not enough experimental evidence to support a usefully quantitative conclusion. The article also quotes Michael Shamos with some very specific numbers:

It’s difficult to say how often votes have genuinely gone astray. Michael Shamos, a computer scientist at Carnegie Mellon University who has examined voting-machine systems for more than 25 years, estimates that about 10 percent of the touch-screen machines “fail” in each election. “In general, those failures result in the loss of zero or one vote,” he told me. “But they’re very disturbing to the public.”

I would love to know where he got those numbers, since many real elections, such as the Sarasota election, seem to have yielded far larger problem rates.

For the record, it’s worth pointing out that Jennings has never conceded the election. Instead, after Florida’s courts decided to deny her motions for expert discovery (i.e., she asked the court to let her experts have a closer look at the voting machines and the court said “no”), Jennings moved her complaint to the Committee on House Administration. Technically, Congress is responsible for seating its own members and can overturn a local election result. The committee has asked the Government Accountability Office to investigate further. They’re still working on it. Meanwhile, Jennings is preparing to run again in 2008.

In summary, the NYT Magazine article did a reasonable job of conveying the high points of the electronic voting controversy. There will be no surprises for anybody who follows the issue closely, and there are only a few places where the article conveys “facts” that are “truthy” without necessarily being true. If you want to get somebody up to speed on the electronic voting issue, this article makes a fine starting place.

[Irony sidebar: in the same election where Jennings lost due to the undervote anomaly, a voter initiative appeared on the ballot that would require the county to replace its touchscreen voting systems with paper ballots. That initiative passed.]

2007 Predictions Scorecard

As usual, we’ll start the new year by reviewing the predictions we made for the previous year. Here now, our 2007 predictions, in italics, with hindsight in ordinary type.

(1) DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

We predict this every year, and it’s always right. This prediction is so obvious that it’s almost unfair to count it. Verdict: right.

(2) An easy tool for cloning MySpace pages will show up, and young users will educate each other loudly about the evils of plagiarism.

This didn’t happen. Anyway, MySpace seems less relevant now than it did a year ago. Verdict: wrong.

(3) Despite the ascent of Howard Berman (D-Hollywood) to the chair of the House IP subcommittee, copyright issues will remain stalemated in Congress.

As predicted, not much happened in Congress on the copyright front. As usual, some bad bills were proposed, but none came close to passage. Verdict: right.

(4) Like the Republicans before them, the Democrats’ tech policy will disappoint. Only a few incumbent companies will be happy.

Very little changed. For the most part, tech policy issues do not break down neatly along party lines. Verdict: right.

(5) Major record companies will sell a significant number of MP3s, promoting them as compatible with everything. Movie studios won’t be ready to follow suit, persisting in their unsuccessful DRM strategy.

Two of the four major record companies now sell MP3s, and a third announced it will soon start. I haven’t seen sales statistics, but given that Amazon’s store sells only MP3s, sales can’t be too low. As predicted, movie studios are still betting on DRM. Verdict: right.

(6) Somebody will figure out the right way to sell and place video ads online, and will get very rich in the process. (We don’t know how they’ll do it. If we did, we wouldn’t be spending our time writing this blog.)

This didn’t happen. Verdict: wrong.

(7) Some mainstream TV shows will be built to facilitate YouTubing, for example by structuring a show as a series of separable nine-minute segments.

I thought this was a clever prediction, but it didn’t happen. The biggest news in commercial TV this year was the writers’ strike. Verdict: wrong.

(8) AACS, the encryption system for next-gen DVDs, will melt down and become as ineffectual as the CSS system used on ordinary DVDs.

AACS was defeated and you can now buy commercial software that circumvents it. Verdict: right.

(9) Congress will pass a national law regarding data leaks. It will be a watered-down version of the California law, and will preempt state laws.

There was talk about doing this but no bill was passed. Verdict: wrong.

(10) A worm infection will spread on game consoles.

To my knowledge this didn’t happen. It’s a good thing, too, because the closed nature of many game consoles would make a successful worm infection particularly challenging to stamp out. Verdict: wrong.

(11) There will be less attention to e-voting as the 2008 election seems far away and the public assumes progress is being made. The Holt e-voting bill will pass, ratifying the now-solid public consensus in favor of paper trails.

Attention to e-voting was down a bit. Despite widespread public unhappiness with paperless voting, the Holt bill did not pass, mostly due to pushback from state and local officials. Rep. Holt is reportedly readying a more limited bill for introduction in January. Verdict: mostly wrong.

(12) Bogus airport security procedures will peak and start to decrease.

Bogus procedures may or may not have peaked, but I didn’t see any decrease. Verdict: unclear.

(13) On cellphones, software products will increasingly compete independent of hardware.

There was a modest growth of third-party software applications for cellphones, including some cross-platform applications. But there was less of this than we predicted. Verdict: mostly wrong.

Our overall score: five right, two mostly wrong, five wrong, one unclear. Next: our predictions for 2008.