
Archives for August 2007


Debate: Will Spam Get Worse?

This week I participated in Business Week Online’s Debate Room feature, where two people write short essays on opposite sides of a proposition.

The proposition: “Regardless of how hard IT experts work to intercept the trillions of junk e-mails that bombard hapless in-boxes, the spammers will find ways to defeat them.” I argued against, concluding that “We’ll never be totally free of spam, but in the long run it’s a nuisance—not a fundamental threat—to the flourishing of the Internet.”


Does Apple Object to iPhone Unlocking?

I wrote Monday about efforts to “unlock” the iPhone so it worked on non-AT&T cell networks, and the associated legal and policy issues. AT&T lawyers have aggressively tried to stop unlocking; but Apple has been pretty silent. What position will Apple take?

It might seem that Apple has nothing to lose from unlocking, but that’s not true. AT&T can exploit customer lock-in by charging higher prices, so it has an obvious incentive to stop unlocking. But AT&T also reportedly gives Apple a cut of iPhone users’ fees – $3/month for existing AT&T customers and $11/month for new ones. This isn’t surprising – in exchange for creating the lock-in, Apple gets to keep a (presumably) hefty share of the resulting revenue.

Apple’s incentive is much like AT&T’s. Apple makes more money from iPhone customers who use AT&T than from those who use other cell providers, so Apple gains by driving customers to AT&T. And it’s not pocket change – Apple gets roughly $150 per user – so even though Apple gets money for selling iPhones to non-AT&T users, it gets considerably more if it can drive those users to AT&T.

Thus far, Apple seems happy to let AT&T take the blame for intimidating the unlockers. This mirrors Apple’s game plan regarding music copy-protection, where it gestures toward openness and blames the record companies for requiring restrictive technology. If this works, Apple gets the benefit of lock-in but AT&T gets the blame.

From Apple’s standpoint, an even better result might be to have iPhone unlocking be fairly painful and expensive, but not impossible. Then customers who are allergic to AT&T would still buy iPhones, but almost everybody else would stick with AT&T. So Apple would win both ways, selling iPhones to everybody while preserving its AT&T payments.

What a clever Jobsian trick – using a business model based on restriction, while planting the blame on somebody else.


iPhone Unlocked; Legal Battle Looming?

In the past few days several groups declared victory in the battle to unlock the iPhone – to make the iPhone work on cellular networks other than AT&T’s. New Jersey teenager George Hotz published instructions (starting here) for a geeks-only unlock procedure involving hardware and software tweaks. An anonymous group called iPhoneSimFree reportedly has an easy all-software unlock procedure which they plan to sell. And a company called UniquePhones was set to sell a remote unlocking service.

(Technical background: The iPhone as initially sold worked only on the AT&T cell network – the device was pretty much useless until you activated AT&T wireless service on it. People figured out quickly that you could immediately cancel the wireless service to get an iPhone that worked only via WiFi; but you couldn’t use it on any other mobile phone/data network. This was not a fundamental technical limitation of the device, but was instead a technological tie designed by Apple to drive business to AT&T.)

Unlocking the iPhone helps everybody, except AT&T, which would prefer not to face competition in selling wireless services to iPhone users. So AT&T, predictably, seems to be sending its lawyers after the unlockers. UniquePhones, via its site, reports incoming lawyergrams from AT&T regarding “issues such as copyright infringement and illegal software dissemination”; UniquePhones has delayed its product release to consider its options. The iPhoneSimFree members are reportedly staying anonymous because of legal concerns.

Can AT&T cook up a legal theory justifying a ban on iPhone unlocking? I’ll leave that question to the lawyers. It seems to me, though, that regardless of what the law does say, it ought to say that iPhone unlocking is fine. For starters, the law should hesitate to micromanage what people do with the devices they own. If you want to run different software on your phone, or if you want to use one cell provider rather than another, why should the government interfere?

I’ll grant that AT&T would prefer that you buy their service. Exxon would prefer that you be required to buy gasoline from them, but the government (rightly) doesn’t try to stop you from filling up elsewhere. The question is not what benefits AT&T or Exxon, but what benefits society as a whole. And the strong presumption is that letting the free market operate – letting customers decide which product to buy – is the best and most efficient policy. Absent some compelling argument that iPhone lock-in is actually necessary for the market to operate efficiently, government should let customers choose their cell operator. Indeed, government policy already tries to foster choice of carriers, for example by requiring phone number portability.

Regardless of what AT&T does, its effort to stop iPhone unlocking is likely doomed. Unlocking software is small and easily transmitted. AT&T’s lawyers can stick a few fingers in the dike, but they won’t be able to stop the unlocking software from getting to people who want it. This is yet another illustration that you can’t lock people out of their own digital devices.


Why Was Skype Offline?

Last week Skype, the popular, free Net telephony service, was unavailable for a day or two due to technical problems. Failures of big systems are always interesting and this is no exception.

We have only limited information about what went wrong. Skype said very little at first but is now opening up a little. Based on their description, it appears that the self-organization mechanism in Skype’s peer-to-peer network became unstable. Let’s unpack that to understand what it means, and what it can tell us about systems like this.

One of the surprising facts about big information systems is that the sheer scale of a system changes the engineering problems you face. When a system grows from small to large, the existing problems naturally get harder. But you also see entirely new problems that didn’t even exist at small scale – and, worse yet, this will happen again and again as your system keeps growing.

Skype uses a peer-to-peer organization, in which the traffic flows through ordinary users’ computers rather than being routed through a set of central servers managed by Skype itself. The advantage of exploiting users’ computers is that they’re available at no cost and, conveniently, there are more of them to exploit when there are more users requesting service. The disadvantage is that users’ computers tend to reboot or go offline more than dedicated servers would.

To deal with the ever-changing population of user computers, Skype has to use a clever self-organization algorithm that allows the machines to organize themselves without relying (more than a tiny bit) on a central authority. Self-organization has two goals: (1) the system must respond quickly to changed conditions to get back into a good configuration soon, and (2) the system must maintain stability as conditions change. These two goals aren’t entirely contradictory, but they are at least in tension. Responding quickly to changes makes it difficult to maintain stability, and the system must be engineered to make this tradeoff wisely in a wide range of conditions. Getting this right in a huge P2P system like Skype is tricky.
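Skype hasn’t published its self-organization algorithm, but the respond-quickly/maintain-stability tension can be illustrated with a toy feedback model (entirely hypothetical numbers and dynamics, not Skype’s actual mechanism): a system that corrects a small fraction of its error each step converges slowly but surely, while one that overcorrects overshoots more and more and never settles.

```python
def settle_steps(gain, target=100.0, start=500.0, tol=1.0, max_steps=1000):
    """Toy model of a self-organizing system. Each step it corrects a
    fraction `gain` of its distance from the desired configuration.
    Low gain is stable but slow; moderate gain settles quickly; with
    gain >= 2 each response overshoots at least as far as the error it
    was correcting, so the system never stabilizes from this start."""
    x = start
    for step in range(max_steps):
        if abs(x - target) < tol:
            return step              # reached a stable configuration
        x -= gain * (x - target)     # corrective response
    return None                      # never stabilized

print(settle_steps(0.1))   # stable, but takes many steps
print(settle_steps(0.9))   # fast and stable
print(settle_steps(2.1))   # None: responses snowball instead of settling
```

The last case is the flavor of failure Skype describes: each “correction” makes the instability worse, and the feedback loop runs away.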

Which brings us to the story of last week’s failure, as described by Skype. On Tuesday August 14, Microsoft released a new set of patches to Windows, according to their normal monthly cycle. Many Windows machines downloaded the patch, installed it, and then rebooted. Each such machine would leave the Skype network when it shut down, then rejoin after booting. So the effect of Microsoft’s patch release was to increase the turnover in Skype’s network.

The result, Skype says, is that the network became unstable as the respond-quickly mechanism outran the maintain-stability mechanism; and the problem snowballed as the growing instability caused ever stronger (but poorly aimed) responses. The Skype service was essentially unavailable for a day or two starting on Thursday August 16, until the company could track down the problem and fix a code bug that it said contributed to the problem.

The biggest remaining mystery is why the problem took so long to develop. Microsoft issued the patch on Tuesday, and Skype didn’t get into deep trouble until Thursday. We can explain away some of the delay by noting that Windows machines might take up to a day to download the patch and reboot, but this still means it took Skype’s network at least a day to melt down. I’d love to know more about how this happened.

I would hesitate to draw too many broad conclusions from a single failure like this. Large systems of all kinds, whether centralized or P2P, must fight difficult stability problems. When a problem like this does occur, it’s a useful natural experiment in how large systems behave. I only hope Skype has more to say about what went wrong.


E-Voting Ballots Not Secret; Vendors Don't See Problem

Two Ohio researchers have discovered that some of the state’s e-voting machines put a timestamp on each ballot, which severely erodes the secrecy of ballots. The researchers, James Moyer and Jim Cropcho, used the state’s open records law to get access to ballot records, according to a story by Declan McCullagh. The pair say they have reconstructed the individual ballots for a county tax referendum in Delaware County, Ohio.

Timestamped ballots are a problem because polling-place procedures often record the time or sequence of voters’ arrivals. For example, at my polling place in New Jersey, each voter is given a sequence number which is recorded next to the voter’s name in the poll book records and is recorded in notebooks by Republican and Democratic poll watchers. If I’m the 74th voter using the machine today, and the recorded ballots on that machine are timestamped or kept in order, then anyone with access to the records can figure out how I voted. That, of course, violates the secret ballot and opens the door to coercion and vote-buying.
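To see how little “analysis” the linking requires, here is a sketch with made-up names, times, and votes – not real data – showing that connecting timestamped ballots to the sign-in record is just a sort and a zip:

```python
# Hypothetical poll-book sign-in order and timestamped ballot records.
signins = ["Alice", "Bob", "Carol"]                    # recorded arrival order
ballots = [("10:42", "No"), ("09:15", "Yes"), ("09:58", "Yes")]

# Sorting the ballots by timestamp recovers the order in which they were
# cast, which matches the sign-in order -- linking each voter to a vote.
linked = dict(zip(signins, (vote for _, vote in sorted(ballots))))
print(linked)  # {'Alice': 'Yes', 'Bob': 'Yes', 'Carol': 'No'}
```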

Most e-voting systems that have been examined get this wrong. In the recent California top-to-bottom review, researchers found that the Diebold system stores the ballots in the order they were cast and with timestamps (report pp. 49-50), and the Hart (report p. 59) and Sequoia (report p. 64) systems “randomize” stored ballots in an easily reversible fashion. Add in the newly discovered ES&S system, and the vendors are 0-for-4 in protecting ballot secrecy.
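The reports don’t spell out the vendors’ exact schemes, but here is a sketch of the generic mistake: “randomizing” ballot order with a deterministic, seeded shuffle. Anyone who knows or can guess the seed (the example below uses a timestamp-like value, a plausible but hypothetical choice) can replay the shuffle and invert it:

```python
import random

def store_ballots(ballots, seed):
    """'Randomize' ballot order with a deterministic, seeded shuffle --
    the kind of easily reversible scheme the reviews criticized."""
    order = list(range(len(ballots)))
    random.Random(seed).shuffle(order)
    return [ballots[i] for i in order]

def recover_order(stored, seed):
    """Replay the same shuffle and invert it, restoring the original
    casting order -- no cryptanalysis needed, just the seed."""
    order = list(range(len(stored)))
    random.Random(seed).shuffle(order)
    original = [None] * len(stored)
    for pos, i in enumerate(order):
        original[i] = stored[pos]
    return original

ballots = ["Yes", "Yes", "No", "Yes", "No"]
stored = store_ballots(ballots, seed=1194355200)   # e.g. a timestamp
assert recover_order(stored, seed=1194355200) == ballots
```

Real randomization would need an unpredictable source and a storage scheme that discards the permutation entirely.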

You’d expect the vendors to hurry up and fix these problems, but instead they’re just shrugging them off.

An ES&S spokeswoman at the Fleishman-Hillard public relations firm downplayed concerns about vote linking. “It’s very difficult to make a direct correlation between the order of the sign-in and the timestamp in the unit,” said Jill Friedman-Wilson.

This is baloney. If you know the order of sign-ins, and you can put the ballots in order by timestamp, you’ll be able to connect them most of the time. You might make occasional mistakes, but that won’t reassure voters who want secrecy.

You know things are bad when questions about a technical matter like security are answered by a public-relations firm. Companies that respond constructively to security problems are those that see them not merely as a PR (public relations) problem but as a technology problem with PR implications. The constructive response in these situations is to say, “We take all security issues seriously and we’re investigating this report.”

Diebold, amazingly, claims that they don’t timestamp ballots – even though they do:

Other suppliers of electronic voting machines say they do not include time stamps in their products that provide voter-verified paper audit trails…. A spokesman for Diebold Election Systems (now Premier Election Solutions), said they don’t for security and privacy reasons: “We’re very sensitive to the integrity of the process.”

You have to wonder why e-voting vendors are so much worse at responding to security flaw reports than makers of other products. Most software vendors will admit problems when they’re real, will work constructively with the problems’ discoverers, and will issue patches promptly. Companies might try PR bluster once or twice, but they learn that bluster doesn’t work and they’re just driving away customers. The e-voting companies seem to make the same mistakes over and over.


OLPC Review Followup

Last week’s review of the One Laptop Per Child (OLPC) machine by twelve-year-old “SG” was one of our most-commented-upon posts ever. Today I want to follow up on a few items.

First, the machine I got for SG was the B2 (Beta 2) version of the OLPC system, which is not the latest. Folks from the OLPC project suggest that some of the problems SG found are fixed in the latest version. They have graciously offered to send an up-to-date OLPC machine for SG to review. SG has agreed to try out the new machine and review it here on Freedom to Tinker.

Second, I was intrigued by the back-and-forth in the comments over SG’s gender. I had originally planned to give SG a pseudonym that revealed SG’s gender, but a colleague suggested that I switch to a gender-neutral pseudonym. Most commenters didn’t seem to assume one gender or the other. A few assumed that SG is a boy, which generated some pushback from others who found that assumption sexist. My favorite comment in this series was from “Chris,” who wrote:

Why are you assuming the review was written by a boy?
At 12 we’re only two years from 8th grade level, the rumored grail (or natural default) of our national publications. SG, you’re clearly capable of writing for most any publication in this country, you go girl! (even if you are a boy)

Third, readers seem to be as impressed as I was by the quality of SG’s writing. Some found it hard to believe that a twelve-year-old could have written the post. But it was indeed SG’s work. I am assured that SG’s parents did not edit the post but only suggested in general terms the addition of a paragraph about what SG did with the machine. I suggested only one minor edit to preserve SG’s anonymity. Otherwise what you read is what SG wrote.

Though sentences like “My expectations for this computer were, I must admit, not very high” seem unusual for a twelve-year-old, others show a kid’s point of view. One example: “Every time you hit a key, it provides a certain amount of satisfaction of how squishy and effortless it is. I just can’t get over that keyboard.”

SG is welcome to guest blog here in the future. Kids can do a lot, if we let them.

[Update (June 2012): I can reveal now that SG is a girl: my daughter Claire Felten.]


One Laptop Per Child, Reviewed by 12-Year-Old

[I recently got my hands on one of the One Laptop Per Child machines. I found the perfect person to review the machine. Today’s guest blogger, SG, is twelve years old and is the child of a close friend. I lent the laptop to SG and asked SG to write a review, which appears here just as SG wrote it, without any editing. –Ed]

[Update(June 2012): I can reveal now that SG is my daughter, Claire Felten.]

I’ve spent all of my life around computers and laptops. I’m only 12 years old though, so I’m not about to go off and start programming a computer to do my homework for me or anything. My parents use computers a lot, so I know about HTML and mother boards and stuff, but still I’m not exactly what you would call an expert. I just use the computer for essays, surfing the web, etc.

Over the last few days, I spent a lot of time on this laptop. I went on the program for typing documents, took silly pictures with the camera, went on the web, played the matching game, recorded my voice on the music-making application, and longed for someone to join me on the laptop-to-laptop messaging system. Here is what I discovered about the OLPC laptops:

My expectations for this computer were, I must admit, not very high. But it completely took me by surprise. It was cleverly designed, imaginative, straightforward, easy to understand (I was given no instructions on how to use it. It was just, “Here. Figure it out yourself.”), useful and simple, entertaining, dependable, really a “stick to the basics” kind of computer. It’s the perfect laptop for the job. Great for first time users, it sets the mood by offering a bunch of entertaining and easy games and a camera. It also has an application that allows you to type things. The space is a little limited, but the actual thing was great. It doesn’t have one of those impossible-to-read fonts but it was still nice. When the so-so connection allows you to get on, the internet is one of the best features of the whole computer. With a clever and space-saving toolbar, it is compact, well designed, accessible, and fast.

But, unfortunately, the internet is the only fast element of the computer. My main problem with this laptop is how very slow it is. It’s true that I am used to faster computers, but that’s not the problem. It’s just really slow. I had to wait two minutes to get onto one application. That’s just a little longer than I can accept. Also, it got slower and slower and slower the longer I went without rebooting it. I had to reboot it all the time. We’re talking once every two or three hours of use! And one of the most frustrating things about the system was that it gave no warning when it was out of power (as it was often because it lost charge very quickly) but just shut down. It doesn’t matter if you’re working on your autobiography and you had gotten all the way to the day before yesterday and forgotten to save it, it just shuts off and devours the whole thing.

This laptop is definitely designed for harsh conditions. Covered in a green and white hard plastic casing, it is designed not to break if dropped. It has a very nice handle for easy transportation and two antennas in plastic that can be easily put up. Once you open it, you see the screen (pretty high resolution) and my favorite part of the computer: the keyboard. It’s green rubber so that dust and water won’t get in under the keys, and this makes the keyboard an awesome thing to type on. Every time you hit a key, it provides a certain amount of satisfaction of how squishy and effortless it is. I just can’t get over that keyboard. There is also a button that changes the brightness of the screen. The other cool thing is that the screen is on a swiveling base, so you can turn it backwards then close it. This makes the laptop into just a screen with a handle.

All in all, this laptop is great for its price, its job, and its value. It is almost perfect. Just speed it up, give it a little more battery charge hold, and you have yourself the perfect laptop. I’m sure kids around the world will really love, enjoy, and cherish these laptops. They will be so useful. This program is truly amazing.


Sony-BMG Sues Maker of Bad DRM

Major record company Sony-BMG has sued the company that made some of the dangerous DRM (anti-copying) software that shipped on Sony-BMG compact discs back in 2005, according to an Antony Bruno story in Billboard.

Longtime Freedom to Tinker readers will remember that back in 2005 Sony-BMG shipped CDs that opened security holes and invaded privacy when inserted into Windows PCs. The CDs contained anti-copying software from two companies, SunnComm and First4Internet. The companies’ attempts to fix the problems only made things worse. Sony-BMG ultimately had to recall some of the discs, and faced civil suits and government investigations that were ultimately settled. The whole episode must have cost Sony-BMG many millions of dollars. (Alex Halderman and I wrote an academic paper about it.)

One of the most interesting questions about this debacle is who deserved the blame. SunnComm and First4Internet made the dangerous products, but Sony-BMG licensed them and distributed them to the public. It’s tempting to blame the vendors, but the fact that Sony-BMG shipped two separate dangerous products has to be part of the calculus too. There’s plenty of blame to go around.

As it turned out, Sony-BMG took most of the public heat and shouldered most of the financial responsibility. That was pretty much inevitable considering that Sony-BMG had the deepest pockets, was the entity that consumers knew, and had by far the most valuable brand name. The lawsuit looks like an attempt by Sony-BMG to recoup some of its losses.

The suit will frustrate SunnComm’s latest attempt to run from its past. SunnComm had renamed itself as Amergence Group and was trying to build a new corporate image as some kind of venture capitalist or start-up incubator. (This isn’t the first swerve in SunnComm’s direction – the company started out as a booking agency for Elvis impersonators. No, I’m not making that up.) The suit and subsequent publicity won’t help the company’s image any.

The suit itself will be interesting, if it goes ahead. We have long wondered exactly what Sony knew and when, as well as how the decision to deploy the dangerous technology was made. Discovery in the lawsuit will drag all of that out, though it will probably stay behind closed doors unless the case makes it to court. Sadly for the curious public, a settlement seems likely. SunnComm/Amergence almost certainly lacks the funds to fight this suit, or to pay the $12 million Sony-BMG is asking for.


On the emotions you feel when you do a security review

[I’m happy to introduce Dan Wallach, who will be blogging here from time to time. Dan is an Associate Professor of Computer Science at Rice University. He’s a leading security expert who has done great work on several topics, including e-voting. – Ed]

I was one of the co-authors of the Hart InterCivic source code report, as part of California’s “top to bottom” analysis of its voting systems. As many Freedom to Tinker readers now know, we found problems. Lots of problems. I’ve done this sort of thing before, as have many others, and I realized that there’s a somewhat odd emotion that we all feel when we do it. You’re happy because you found how to break something, but you’re sad that the system is so poorly engineered. It’s a great accomplishment that we were able to discover so much, but it’s terrible that widely used systems have such easily exploitable vulnerabilities. What word can describe that good/bad emotion?

About a year ago, I started asking everybody I knew, speakers of any language, if their language had a word to describe that emotion. Somebody, somewhere, must have such a word. There are lots of close-but-no-cigar choices, such as:

Schadenfreude (German) – the pleasure you feel at somebody else’s pain (common example: laughing at Hollywood celebrities arrested for drunk driving)

Bathos (Greek) – mixing serious issues with humor (a common literary device)

Neither quite captures it. Finally, in a discussion with my colleague, Moshe Vardi, we came up with a Yiddish coinage that seems to do the trick: oy gevaldik.

Origin? Oy vey is a standard Yiddish expression of woe (similar to “oh boy”). Oy gevalt is a stronger version of the same expression (similar to “oh expletive” for milder expletives). Curiously, the Yiddish word for beautiful is gevaldik, which sounds similar to gevalt. Put it together, and you get oy gevaldik. Oh, beautiful. And that’s what security reviews are all about.


More California E-Voting Reports Released; More Bad News

Yesterday the California Secretary of State released the reports of three source code study teams that analyzed the source code of e-voting systems from Diebold, Hart InterCivic, and Sequoia.

All three reports found many serious vulnerabilities. It seems likely that computer viruses could be constructed that could infect any of the three systems, spread between voting machines, and steal votes on the infected machines. All three systems use central tabulators (machines at election headquarters that accumulate ballots and report election results) that can be penetrated without great effort.

It’s hard to convey the magnitude of the problems in a short blog post. You really have to read through the reports – the shortest one is 78 pages – to appreciate the sheer volume and diversity of severe vulnerabilities.

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
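To illustrate just one of these shared mistakes – using a cyclic redundancy code for data integrity – here is a sketch (my own illustration, not the vendors’ actual code). A CRC detects accidental corruption, but it is unkeyed: an attacker who can modify a record can simply recompute the checksum, and the forged record verifies perfectly.

```python
import zlib

def seal(record: bytes) -> bytes:
    """'Protect' a record by appending a CRC-32 checksum -- the kind
    of integrity check the reviews found. Detects accidents only."""
    return record + zlib.crc32(record).to_bytes(4, "big")

def verify(sealed: bytes) -> bool:
    record, tag = sealed[:-4], sealed[-4:]
    return zlib.crc32(record).to_bytes(4, "big") == tag

sealed = seal(b"candidate=Smith;votes=1")

# The attacker needs no secret to forge a valid record: change the
# data and recompute the unkeyed checksum.
forged = seal(b"candidate=Jones;votes=1")
assert verify(sealed) and verify(forged)
```

Genuine integrity protection needs a keyed primitive (a MAC or a digital signature), so that recomputing the tag requires a secret the attacker doesn’t have.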

Some of these are problems that the vendors claimed to have fixed years ago. For example, Diebold claimed (p. 11) in 2003 that its use of hard-coded passwords was “resolved in subsequent versions of the software”. Yet the current version still uses at least two hard-coded passwords – one is “diebold” (report, p. 46) and another is the eight-byte sequence 1,2,3,4,5,6,7,8 (report, p. 45).

Similarly, Diebold in 2003 ridiculed (p. 6) the idea that their software could suffer from buffer overflows: “Unlike a Web server or other Internet enabled applications, the code is not vulnerable to most ‘buffer overflow attacks’ to which the authors [Kohno et al.] refer. This form of attack is almost entirely inapplicable to our application. In the limited number of cases in which it would apply, we have taken the steps necessary to ensure correctness.” Yet the California source code study found several buffer overflow vulnerabilities in Diebold’s systems (e.g., issues 5.1.6, 5.2.3 (“multiple buffer overflows”), and 5.2.18 in the report).

As far as I can tell, major news outlets haven’t taken much notice of these reports. That in itself may be the most eloquent commentary on the state of e-voting: reports of huge security holes in e-voting systems are barely even newsworthy any more.