
Software in dangerous places

Software increasingly manages the world around us, in subtle ways that are often hard to see. Software helps fly our airplanes (in some cases, particularly military fighter aircraft, software is the only thing keeping them in the air). Software manages our cars (fuel/air mixture, among other things). Software manages our electrical grid. And, closer to home for me, software runs our voting machines and manages our elections.

Sunday’s NY Times Magazine has an extended piece about faulty radiation delivery for cancer treatment. The article details two particular fault modes: procedural screwups and software bugs.

The procedural screwups (e.g., treating a patient with stomach cancer with a radiation plan intended for somebody else’s breast cancer) are heartbreaking because they’re something that could be completely eliminated through fairly simple mechanisms. How about putting barcodes on patient armbands that are read by the radiation machine? “Oops, you’re patient #103 and this radiation plan is loaded for patient #319.”
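
To make that concrete, here’s a sketch of the kind of interlock I have in mind. It’s purely illustrative (the field names and error handling are mine, not any real device’s API): the machine simply refuses to start unless the ID scanned from the armband matches the ID in the loaded plan.

```python
# Hypothetical safety interlock, purely illustrative; no real
# device API is implied. The machine refuses to treat unless the
# patient ID scanned from the armband matches the loaded plan.

from dataclasses import dataclass

@dataclass
class TreatmentPlan:
    patient_id: str
    description: str

class PatientMismatchError(Exception):
    pass

def verify_patient(scanned_id: str, plan: TreatmentPlan) -> None:
    """Hard stop: raise if the armband and the plan disagree."""
    if scanned_id != plan.patient_id:
        raise PatientMismatchError(
            f"scanned patient {scanned_id!r}, but plan is for "
            f"patient {plan.patient_id!r}; refusing to treat"
        )

plan = TreatmentPlan(patient_id="319", description="breast plan")
try:
    verify_patient("103", plan)   # wrong patient on the table
except PatientMismatchError as e:
    print("INTERLOCK:", e)
```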

The software bugs are another matter entirely. Supposedly, medical device manufacturers, and software correctness people, have all been thoroughly indoctrinated in the history of Therac-25, a radiation machine from the mid-80’s whose poor software engineering (and user interface design) directly led to several deaths. This article seems to indicate that those lessons were never properly absorbed.

What’s perhaps even more disturbing is that nobody seems to have been deeply bothered when the radiation planning software crashed on them! Did it save their work? Maybe you should double-check? Ultimately, the radiation machine just does what it’s told, and the software that plans out the precise dosing pattern is responsible for getting it right. Well, if that software is unreliable (which the article clearly indicates), you shouldn’t use it again until it’s fixed!

What I’d like to know more about, and which the article didn’t discuss at all, is what engineering processes, third-party review processes, and certification processes were used. If there’s anything we’ve learned about voting systems, it’s that the federal and state certification processes were not up to the task of identifying security vulnerabilities, and that the vendors had demonstrably never intended their software to resist the sorts of attacks that you would expect on an election system. Instead, we’re told that we can rely on poll workers following procedures correctly. Which, of course, is exactly what the article indicates is standard practice for these medical devices. We’re relying on the device operators to do the right thing, even when the software is crashing on them, and that’s clearly inappropriate.

Writing “correct” software, and further ensuring that it’s usable, is a daunting problem. In the voting case, we can at least come up with procedures based on auditing paper ballots, or using various cryptographic techniques, that allow us to detect and correct flaws in the software (getting such procedures adopted is a daunting problem in its own right, but that’s a story for another day). In the aviation case, which I admit to not knowing much about, I do know they put in sanity-checking software that will detect when the more detailed algorithms are asking for something insane and will override it. For medical devices like radiation machines, we clearly need a similar combination of mechanisms, both to ensure that operators don’t make avoidable mistakes, and to ensure that the software they’re using is engineered properly.
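
To illustrate the aviation-style idea, here’s a minimal sketch of an independent sanity check for dose delivery. The limits and names are invented for the example; a real machine’s safe envelope would come from physicists, not a blog post.

```python
# Illustrative last-line sanity check, independent of the planning
# software. Limits and field names are invented for this sketch,
# not taken from any real machine.

from dataclasses import dataclass

MAX_DOSE_RATE = 6.0    # Gy/min, hypothetical hard limit
MAX_FIELD_SIZE = 40.0  # cm, hypothetical hard limit

@dataclass
class BeamCommand:
    dose_rate: float   # Gy/min
    field_size: float  # cm

def sanity_check(cmd: BeamCommand) -> BeamCommand:
    """Refuse any command outside the fixed safe envelope."""
    if not 0.0 < cmd.dose_rate <= MAX_DOSE_RATE:
        raise ValueError(f"dose rate {cmd.dose_rate} Gy/min outside envelope")
    if not 0.0 < cmd.field_size <= MAX_FIELD_SIZE:
        raise ValueError(f"field size {cmd.field_size} cm outside envelope")
    return cmd

sanity_check(BeamCommand(dose_rate=2.0, field_size=10.0))      # passes
# sanity_check(BeamCommand(dose_rate=250.0, field_size=10.0))  # would raise
```

The value here is that the check is independent of, and far simpler than, the planning software, so a planning bug has to slip past a second, much smaller program before anyone gets hurt.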

Will they ever learn? Hollywood still pursuing DRM

In today’s New York Times, we read that Hollywood is working on a grand unified video DRM scheme intended to allow for video portability, so that, for example, when you visit a hotel room, you can have your videos with you.

What’s sad, of course, is that you can have all of this today with very little fuss. I use iTiVo to extract videos from my TiVo, transcoding them to an iPhone-compatible format. I similarly use Fairmount to rip DVDs to my hard drive, making them easy to play later without worrying about the physical media getting damaged or lost. But if I want to download video, there’s no easy way to get non-DRM content. BitTorrent gives access to many things, including my favorite Top Gear, which I cannot get through any other channel, but much of what I’d like isn’t available, and of course, there’s the whole legality issue.

I recently bought a copy of Disney/Pixar’s Up (Blu-ray), which includes a “Digital Copy” of some sort that’s rippable, but the discs themselves are rippable as well (even the Blu-ray), so I haven’t bothered to sort out how the “Digital Copy” works.

(UPDATE: the disc contains Windows and Mac executables which ask the user for an “activation code” that is then sent to a Disney server, which responds with some sort of decryption key. The resulting file is then installed in iTunes or Windows Media Player with their native DRM restrictions. The Disney server, of course, wants you to set up an account, and they’re working up some sort of YouTube-ish streaming experience for movies where you’ve entered an activation code.)

So what exactly are the Hollywood types cooking up? There are no technical details in the article, but the broad idea seems to be that you authenticate as yourself from any device, anywhere, and then the central server will let you at “your” content. It’s unclear to what extent they have an offline viewing story, such as you might want on your computer on an airplane. One would imagine they would download an encrypted file, perhaps customized for you, along with a dedicated video player that keeps the key material hidden away through easily broken, poorly conceived mechanisms.
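
Purely as speculation (the article offers no technical details), the standard pattern for this sort of offline scheme is per-user envelope encryption, and it illustrates exactly where the weakness lives: to play the video offline, the unwrapping key has to be on the device.

```python
# Speculative sketch of per-user "offline" DRM; this is a generic
# envelope-encryption pattern, not any real studio's scheme. The
# video is encrypted once with a content key, and that key is
# wrapped per-user. Note the unavoidable weakness: to play offline,
# the user key must be present on the device.

from cryptography.fernet import Fernet

content_key = Fernet.generate_key()
video_ciphertext = Fernet(content_key).encrypt(b"...video bytes...")

# Server side: wrap the content key for this specific user.
user_key = Fernet.generate_key()  # derived per-account in practice
wrapped_key = Fernet(user_key).encrypt(content_key)

# Client side: the "dedicated player" must hold user_key to unwrap,
# so anyone who extracts user_key from the player recovers the video.
recovered = Fernet(Fernet(user_key).decrypt(wrapped_key)).decrypt(video_ciphertext)
assert recovered == b"...video bytes..."
```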

It’s not like we haven’t been here before. I just wonder if we’ll have a repeat of the ill-fated SDMI challenge.

Advice on stepping up to a better digital camera

This is a bit off from the usual Freedom to Tinker post, but with tomorrow being “Black Friday” and retailers offering some steep discounts on consumer electronics, many Tinker readers will be out there buying gear or will be offering buying advice to their friends.

Over the past several months, several friends of mine have mentioned that they were considering “moving up” to a D-SLR camera and asked me for advice. I’ve been what you might term a “serious amateur” photographer since high school, when I was the head photographer for the school yearbook and newspaper. (It was a non-trivial issue for me to decide whether to make my career in photography or in computers.)

To address this, I wrote a guide to upgrading your digital camera. I’ve written this for a non-technical audience. Pass it around and enjoy.

DRM by any other name: The latest from Hollywood

Sunday’s New York Times had an article, Studios’ Quest for Life After DVDs. To nobody’s surprise, consumers want to have convenient access to “their” media, wherever they happen to be, without all the annoying restrictions that come into play when you add DRM to the picture. To many people’s surprise, sales of DVDs (much less Blu-ray) are in trouble.

In the third quarter, studios’ home entertainment divisions generated about $4 billion, down 3.2 percent from a year ago, according to the Digital Entertainment Group, a trade consortium. But digital distribution contributed just $420 million, an increase of 18 percent.

Given that DVDs are really a luxury good (versus, say, food or electricity), the 3.2 percent drop seems like Hollywood is getting off easy. The growth in digital distribution is clearly getting attention, though. What’s going on here? I imagine several things. People sometimes miss their shows. Maybe the cable went out. Maybe the TiVo crashed. Maybe they’re on the road. Drop $2 at the iTunes Store and you’re good to go. That’s attractive and it’s real money.

Still, the article goes on to talk about… yet more DRM.

Standing in the way are technology hurdles — how to let consumers play a video on various devices without letting them share it with 10,000 close friends on a pirate site — and the reluctance of studios to cooperate too closely with rivals for reasons of antitrust scrutiny and sheer competitiveness.

And piracy, at least conceptually, would be less of a worry. The technology [Disney’s Keychest] rests on cloud computing, in which huge troves of data are stored on remote servers so users have access from anywhere. Movies would be streamed from the cloud and never downloaded, making them harder to pirate.

Of course, this is baloney. If it’s going to work on my iPhone while I’m sitting in an airplane, the entire video needs to be stored there in advance. Furthermore, if the video is supposed to be “high definition,” that’s a bare minimum of 5 megabits/sec. (Broadcast HD is 20 megabits/sec and Blu-ray is 48 megabits/sec.) Most home DSL or cable modem connections either will never go that fast or cannot reliably maintain those speeds without hiccups, particularly when sharing the line with other users. To do high-quality video, you either have to have a real broadcast medium (cable, over-the-air, or satellite) or you have to download in advance and store on a hard drive.
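
A quick back-of-the-envelope calculation makes the point concrete: even at the low-end 5 megabit/sec figure, a two-hour movie is a multi-gigabyte download that has to land on a disk somewhere before you board the plane.

```python
# Back-of-the-envelope: file sizes for a two-hour movie at the
# bitrates mentioned above.
SECONDS = 2 * 60 * 60  # two hours

for label, mbps in [("low-end HD", 5), ("broadcast HD", 20), ("Blu-ray", 48)]:
    gigabytes = mbps * 1e6 * SECONDS / 8 / 1e9
    print(f"{label}: {gigabytes:.1f} GB")

# low-end HD: 4.5 GB
# broadcast HD: 18.0 GB
# Blu-ray: 43.2 GB
```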

And, of course, once you’ve stored the video, it’s just not that hard to extract it. And it always will be. The challenge for Hollywood is to change the incentives of the game. Maybe sell me a flat-rate subscription. Maybe bundle it with my DSL provider. But make the experience compelling enough and cheap enough, and I’ll do it. I regularly extract video from my TiVo and copy it to my iPhone via third-party software. It’s practically painless and it happens to yield files that I could share with the world, but I don’t. Why? Because there’s real downside (I’d rather not get sued, thanks), and no particular upside.

So, dearest Hollywood executive, consider that selling your content for a reduced price, with no DRM, is not the same thing as “giving it away.” If you allow third parties to license your content and distribute it without DRM, you can still go after the “pirates,” yet you’ll allow normal people to enjoy your work without making them suffer for it. Yes, you may have kids copying content from one to the next, just like we used to do dubbing cassette tapes, but those incremental losses can and will be offset by the incremental gains of people enjoying your work and hitting the “buy” button.

Antisocial networking

I just got my invitation to Google Wave. The prototype that’s now public doesn’t have all of the amazing features in the original video demos. At this point, it’s pretty much just a way of collecting IM-style conversations all in one place. But several of my friends are already there, and I’ve had a few conversations there already.

How am I supposed to know when there’s something new going on at Wave? Right now, I need to keep a tab open in my browser and check in, every once in a while, to see what’s up. My standard set of tabs already includes my Gmail, calendar, RSS reader, New York Times homepage, Facebook page, and now Google Wave. Add in the occasional Twitter tab (or dedicated Twitter client, if I feel like running it) plus an occasional IM window, and all of these things are competing for my attention when I’m supposed to be getting real work done.

A common way that people try to solve this problem is by building bridges between these services. If you use Twitter and Facebook, there are several ways to arrange for your tweets to show up at Facebook (bewildering Facebook users with all the #hashtags and @references) and there are also a handful of ways for getting data out of Facebook. I’d been using FriendFeed as a central hub for all this, but it would sometimes stop working for days at a time. Now that they’ve been bought out by Facebook, maybe this will shake itself out.

The bigger problem is that these various vendors and technologies have different data models for visibility and for how metadata is represented. In Twitter, everything is default-public, follow-up comments are first-class objects in the system, and there’s effectively no metadata outside of the message, which has led Twitter users to adopt a variety of seemingly obscure conventions (e.g., “RT” to indicate a retweet of some other tweet). Contrast this with Facebook, where comments are a very different sort of message from the parent messages, where all sorts of security rules (that nobody really understands) govern who can see what, and where there is actual structure to a message. If I link to a YouTube video, it gets magically embedded, versus the annoying URL shorteners that people have to use to shoehorn messages into Twitter.
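
To make the mismatch concrete, here’s a rough caricature of the two data models. The field names are mine, not either company’s actual API:

```python
# Rough caricature of the two data models; all names are mine,
# not Twitter's or Facebook's actual APIs.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Tweet:
    # Default-public; replies are themselves first-class Tweets; any
    # structure (retweets, @mentions, links) is encoded as text
    # conventions inside the message itself.
    author: str
    text: str                          # "RT @alice ..." lives in here
    in_reply_to: Optional[int] = None  # just another tweet's ID

@dataclass
class FacebookPost:
    # Structured: comments are second-class child objects, links get
    # embedded as attachments, and visibility rules travel with the post.
    author: str
    text: str
    attachments: list = field(default_factory=list)  # e.g. embedded video
    comments: list = field(default_factory=list)     # not full posts
    visible_to: str = "friends"  # opaque privacy rules, roughly
```

Any aggregator has to squeeze both of these into one schema, and there’s no agreed-upon place to put visibility rules or comment threading.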

Comments are a favorite area for people to complain about. Twitter comments are often implicit in the @username tags. If I’m following a friend and a friend-of-my-friend comments on one of their tweets, I won’t necessarily see it. In Facebook, I have a better shot at seeing those comments. But what if I write a blog post here at Freedom to Tinker? Facebook nicely picks it up and makes it look just as if I had posted a note on my Facebook page. Now we’ll have comments on Freedom to Tinker and more comments inside Facebook, and the two won’t intermingle. Of course, thanks to FriendFeed, a tweet will (probably) be automatically generated when I post this, causing some small amount of Twitter commenting traffic, and there may be comments within FriendFeed itself as well as Google Reader commentary (which is also different from Google Reader’s “share with note” commentary).

Given these disparate data models, there’s no easy way to unify Twitter and Facebook, much less the commenting diaspora, even assuming you could sort out the security concerns and work around Facebook’s tendency to restrict the flow of data out of its system. This is all the more frustrating because RSS completely solved the equivalent problem of distributing new blog posts across the blog universe. I used to keep a bunch of tabs open to the various blog-like things that I followed, but that quickly proved unwieldy, whereas an RSS aggregator (Google Reader, for me) solved the problem nicely. Could there ever be a social network/microblogging aggregator?
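
For contrast, here’s why RSS aggregation works: every blog exposes the same tiny schema, so a feed reader needs only a few lines per source. This sketch uses the real feedparser library; the feed URL is just an example.

```python
# RSS works because every blog exposes the same minimal schema.
# feedparser is a real Python library; the URL is only an example.
import feedparser

feed = feedparser.parse("https://freedom-to-tinker.com/feed/")
for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)
```

There’s no equivalent five-line program that pulls a unified stream out of Twitter, Facebook, and FriendFeed, because there’s no shared schema for them to expose.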

There is no lack of standards waiting in the wings that would like to do this. (See, for example, OpenMicroBlogging, or our own work on BirdFeeder.) Something like Google Wave could subsume every one of these platforms, although I fear that integrating so many different data models would inevitably result in a deeply clunky UI.

In the end, I think the federation ideas behind Google Wave and BirdFeeder, and good old RSS blog feeds, will ultimately win out, with interoperability between the big vendors, just like they interoperate with email. Getting there, however, isn’t going to happen easily.