NJ Voting-machine trial update

Earlier this month I testified in Gusciora v. Corzine, the trial in which the plaintiffs argue that New Jersey’s voting machines (Sequoia AVC Advantage) can’t be trusted to count the votes, because they’re so easily hacked to make them cheat.

I’ve previously written about the conclusions of my expert report: in 7 minutes you can replace the ROM and make the machine cheat in every future election, and there’s no practical way for the State to detect cheating machines (in part because there’s no voter-verified paper ballot).

The trial started on January 27, 2009 and I testified for four and a half days. I testified that the AVC Advantage can be hacked by replacing its ROM, or by replacing its Z80 processor chip, so that it steals votes undetectably. I testified that fraudulent firmware can also be installed into the audio-voting daughterboard by a virus carried through audio-ballot cartridges. I testified about many other things as well.
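
To see why the standard pre-election "logic and accuracy" testing cannot be counted on to catch such a hack, consider a small conceptual sketch. This is Python pseudocode of my own, not the machine's actual Z80 firmware, and the trigger conditions and vote-shifting rule are purely illustrative assumptions: the point is only that fraudulent firmware can behave honestly whenever it suspects it is being tested.

```python
# Conceptual sketch only: NOT the AVC Advantage's firmware. The trigger
# heuristics and the vote-shifting rule below are hypothetical, chosen to
# show why short pre-election tests would never see the cheating behavior.

from datetime import date


def looks_like_real_election(today: date, ballots_cast: int) -> bool:
    """Cheat only when conditions resemble a real election day."""
    is_november_tuesday = today.month == 11 and today.weekday() == 1
    realistic_turnout = ballots_cast > 50   # L&A tests cast only a few ballots
    return is_november_tuesday and realistic_turnout


def record_vote(totals: dict, choice: str, today: date,
                favored: str = "Candidate A", victim: str = "Candidate B") -> None:
    """Add one vote, silently shifting some of the victim's votes when the
    firmware believes a real election is under way."""
    ballots_cast = sum(totals.values())
    if (looks_like_real_election(today, ballots_cast)
            and choice == victim
            and ballots_cast % 10 == 0):    # shift roughly one vote in ten
        choice = favored
    totals[choice] = totals.get(choice, 0) + 1
```

Because the altered totals are the only record the machine keeps (there is no voter-verified paper ballot), nothing in the printed results would reveal the substitution.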

Finally, I testified about the accuracy of the Sequoia AVC Advantage. I believe that the most significant source of inaccuracy is its vulnerability to hacking. There's no practical means of testing whether the machine has been hacked, and certainly the State of New Jersey does not even attempt such testing. If we could somehow know that the machine has not been hacked, then (as I testified) I believe the most significant _other_ inaccuracy of the AVC Advantage is that it does not give adequate feedback to voters and pollworkers about whether a vote has been recorded. This can lead to a voter's ballot not being counted at all, or to its being counted two or three times (without fraudulent intent). I believe that this error may be on the order of 1% or more, but I was not able to measure it in my study because it involves user-interface interaction with real people.

In the hypothetical case that the AVC Advantage has not been hacked, I believe this user-interface source of perhaps 1% inaccuracy would be very troubling, but (in my opinion) is not the main reason to disqualify it from use in elections. The AVC Advantage should be disqualified for the simple reason that it can be easily hacked to cheat, and there’s no practical method that will be sure of catching this hack.

Security seals. When I examined the State’s Sequoia AVC Advantage voting machines in July 2008, they had no security seals preventing ROM replacement. I demonstrated on video (which we played in Court in Jan/Feb 2009) that in 7 minutes I could pick the lock, unscrew some screws, replace the ROM with one that cheats, replace the screws, and lock the door.

In September 2008, after the State read my expert report, they installed four kinds of physical security seals on the AVC Advantage. These seals were present during the November 2008 election. On December 1, I sent to the Court (and to the State) a supplemental expert report (with video) showing how I could defeat all of these seals.

In November/December the State informed the Court that they were changing to four new seals. On December 30, 2008 the State Director of Elections, Mr. Robert Giles, demonstrated to me the installation of these seals onto the AVC Advantage voting machine and gave me samples. He installed quite a few seals (of these four different kinds, but some of them in multiple places) on the machine.

On January 27, 2009 I sent to the Court (and to the State) a supplemental expert report showing how I could defeat all those new seals. On February 5th, as part of my trial testimony I demonstrated for the Court the principles and methods by which each of those seals could be defeated.

On cross-examination, the State defendants invited me to demonstrate, on an actual Sequoia AVC Advantage voting machine in the courtroom, the removal of all the seals, replacement of the ROM, and replacement of all the seals leaving no evidence of tampering. I then did so, carefully and slowly; it took 47 minutes. As I testified, someone with more practice (and without a judge and 7 lawyers watching) would do it much faster.

The Future of Smartphone Platforms

In 1985, I got my very first home computer: a Commodore Amiga 1000. At the time, it was awesome: great graphics, great sound, “real” multitasking, and so forth. Never mind that you spent half your life shuffling floppy disks around. Never mind that I kept my head full of Epson escape codes to use with my word processing program to get what I wanted out of my printer. No, no, the Amiga was wonderful stuff.

Let's look at the Amiga's generation. Starting with the IBM PC in 1981, the personal computer industry was in the midst of the transition from 8-bit micros (Commodore 64, Apple II, Atari 800, BBC Micro, TI-99/4A, etc.) to 16/32-bit micros (IBM PC, Apple Macintosh, Commodore Amiga, Atari ST, Acorn Archimedes, etc.). These new machines each ran completely unrelated operating systems, and there was no consensus as to which would be the ultimate winner. In 1985, nobody would have declared the PC's victory to have been inevitable. Regardless, we all know how it worked out: Apple developed a small but steady market share, PCs took over the world (sans IBM), and the other computers faded away. Why?

The standard argument is “network effects.” PCs (and to a lesser extent Macs) developed sufficient followings to make them attractive platforms for developers, which in turn made them attractive to new users, which created market share, which created resources for future hardware developments, and on it went. The Amiga, on the other hand, became popular only in specific market niches, such as video processing and editing. Another benefit on the PC side was that Microsoft enabled clone shops, from Compaq to Dell and onward, to battle each other with low prices on commodity hardware. Despite the superior usability of a Mac or the superior graphics and sound of an Amiga, the PC came away the winner.

What about cellular smartphones then? I've got an iPhone. I have friends with Windows Mobile, Android, and Blackberry devices. When the Palm Pre comes out, it should gain significant market share as well. I'm sure there are people out there who love their Symbian or OpenMoko phones. The level of competition in the smartphone world today bears more than a passing resemblance to the competition in the mid-'80s PC market. So who's going to win?

If you believe that the PC's early lead and widespread adoption by business were essential to its rise, then you might expect the Blackberry to win out. If you believe that having the software and hardware come from separate vendors was essential, then you'd favor Windows Mobile or Android. If you're looking for network effects, look no further than the iPhone. If you're looking for the latest, coolest thing, then the Palm Pre sure does look attractive.

I’ll argue that this time will be different, and it’s the cloud that’s going to win. Right now, what matters to me, with my iPhone, is that I can get my email anywhere, I can make phone calls, and I can do basic web surfing. I occasionally use the GPS maps, or even watch a show purchased from the iTunes Store, but if you took those away, it wouldn’t change my life much. I’ve got pages of obscure apps, but none of them really lock me into the platform. (Example: Shazam is remarkably good at recognizing songs that it hears, but the client side of it is a very simple app that they could trivially port to any other smartphone.) On the flip side, I’m an avid consumer of Google’s resources (Gmail, Reader, Calendar, etc.). I would never buy a phone that I couldn’t connect to Google. Others will insist on being able to connect to their Exchange Server.

At the end of the day, the question isn't whether a given smartphone interoperates with your friends' phones, but whether it interoperates with your cloud services. You don't need an Android phone to get a good mobile experience with Google, and you don't need a Windows Mobile phone to get a good mobile experience with Exchange. Leaving one smartphone and adopting another is, if anything, easier than switching between traditional non-smartphones, since you don't have to monkey as much with moving your address book around. As such, I think it's reasonable to predict that in ten years we'll still have at least one smartphone vendor per major cellular carrier, and perhaps more.

If we have further consolidation in the carrier market, that would put pressure on the smartphone vendors to cut costs, which could well lead to consolidation of the smartphone vendors. We could certainly also imagine carriers pushing on the smartphone vendors to include or omit particular features. We see plenty of that already. (Example: can you tether your laptop to a Palm Pre via Bluetooth? The answer seems to be a moving target.) Historically, the U.S. carriers are somewhat infamous for going out of their way to restrict what phones can do. Now, that seems to be mostly fixed, and for that, at least, we can thank Apple.

Let a thousand smartphones bloom? I sure hope so.

Federal Health IT Effort Is Making Progress, Could Benefit from More Transparency

President Obama has indicated that health information technology (HIT) is an important component of his administration’s health care goals. Politicians on both sides of the aisle have lauded the potential for HIT to reduce costs and improve care. In this post, I’ll give some basics about what HIT is, what work is underway, and how the government can get more security experts involved.

We can coarsely break HIT into three technical areas. The first is the transition from paper to electronic records, which involves a surprising number of subtle technical issues, such as interoperability. Second, the development of health information networks will allow sharing of patient data between medical facilities and with other appropriate parties. Third, as a recent National Research Council report discusses, digital records can enable research in new areas, such as cognitive support for physicians.

HIT was not created on the 2008 campaign trail. The Department of Veterans Affairs (VA) has done work in this area for decades, including its widely praised VistA system, which provides electronic patient records and more. Notably, VistA source code and documentation can be freely downloaded. Many other large medical centers also already use electronic patient records.

In 2004, then-President Bush pushed for deployment of a Nationwide Health Information Network (NHIN) and universal adoption of electronic patient records by 2014. The NHIN is essentially a nationwide network for sharing relevant patient data (e.g., if you arrive at an emergency room in Oregon, the doctor can obtain needed records from your regular doctor in Kansas). The Department of Health and Human Services (HHS) funded four consortia to develop smaller, localized networks, partially as a learning exercise to prepare for the NHIN. HHS has held a number of forums where members of these consortia, the government, and the public can meet and discuss timely issues.

The agendas for these forums show some positive signs. Sessions cover a number of tricky issues. For example, participants in one session considered the risk that searches for a patient’s records in the NHIN could yield records for patients with similar attributes, posing privacy concerns. Provided that meaningful conversations occurred, HHS appears to be making a concerted effort to ensure that issues are identified and discussed before settling on solutions.
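
As a concrete, entirely hypothetical illustration of the problem that session was grappling with, the sketch below shows how a loose patient-matching rule can return records for several distinct people. The records and the matching rule are invented for illustration, not taken from any NHIN specification.

```python
# Hypothetical sketch: a loose patient-matching rule can return records for
# several distinct people, which becomes a privacy problem if the extra
# records are shown to the requester. Records and rule are invented.

from dataclasses import dataclass


@dataclass
class PatientRecord:
    name: str
    birth_year: int
    city: str
    record_id: str


DIRECTORY = [
    PatientRecord("J. Smith", 1970, "Portland, OR", "rec-001"),
    PatientRecord("John Smith", 1970, "Salem, OR", "rec-002"),
    PatientRecord("Jon Smith", 1971, "Portland, OR", "rec-003"),
]


def find_candidates(name: str, birth_year: int) -> list:
    """Match on last name plus birth year within one year: broad enough to
    tolerate data-entry variation, and broad enough to pull in strangers."""
    last = name.split()[-1].lower()
    return [r for r in DIRECTORY
            if r.name.split()[-1].lower() == last
            and abs(r.birth_year - birth_year) <= 1]


if __name__ == "__main__":
    for rec in find_candidates("John Smith", 1970):
        print(rec.record_id, rec.name, rec.city)   # all three records match
```

Whether the requester should see all three candidate records, or be asked for more identifying information first, is exactly the kind of trade-off worth settling before deployment.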

Unfortunately, the academic information security community seems divorced from these discussions. Members of that community are likely to analyze the proposed systems eventually, whether before or after they are widely deployed, and it would be far better for that analysis to happen early. In spite of the positive signs mentioned above, past experience shows that even skilled developers can produce insecure systems. Any major flaws uncovered may be embarrassing, but weaknesses found now would be cheaper and easier to fix than ones found in 2014.

A great way to draw constructive scrutiny is to ensure transparency in federally funded HIT work. Limited project details are often available online, but both high- and low-level details can be hard to find. Presumably, members of the NHIN consortia (for example) developed detailed internal documents containing use cases, perceived risks/threats, specifications, and architectural illustrations.

To the extent legally feasible, the government should make documents like these available online. Access to them would make the projects easier to analyze, particularly for those of us less familiar with HIT. In addition, a typical vendor response to reported vulnerabilities is that the attack scenario is unrealistic (this is a standard response of e-voting vendors). Researchers can use these documents to ensure that they consider only realistic attacks.

The federal agenda for HIT is ambitious and will likely prove challenging and expensive. To avoid massive, costly mistakes, the government should seek to get as many eyes as possible on the work that it funds.

Hulu abandons Boxee—now what?

In our last installment, I detailed the trials and tribulations of my attempt to integrate legal, Internet-sourced video into my home theater via a hacked AppleTV, running Boxee, getting its feed from Hulu.

One day later (!), Hulu announced it was all over.

Later this week, Hulu’s content will no longer be available through Boxee. While we never had a formal relationship with Boxee, we are under no illusions about the likely Boxee user response from this move. This has weighed heavily on the Hulu team, and we know it will weigh even more so on Boxee users.

Our content providers requested that we turn off access to our content via the Boxee product, and we are respecting their wishes. While we stubbornly believe in this brave new world of media convergence — bumps and all — we are also steadfast in our belief that the best way to achieve our ambitious, never-ending mission of making media easier for users is to work hand in hand with content owners. Without their content, none of what Hulu does would be possible, including providing you content via Hulu.com and our many distribution partner websites.

(emphasis mine)

On Boxee’s blog, they wrote:

two weeks ago Hulu called and told us their content partners were asking them to remove Hulu from boxee. we tried (many times) to plead the case for keeping Hulu on boxee, but on Friday of this week, in good faith, we will be removing it. you can see their blog post about the issues they are facing.

At least I’m not to blame. Clearly, those who own content are threatened by the ideas we discussed before. Why overpay for cable when you can get the three shows you care about from Hulu for free?

Also interesting to note is the acknowledgment that there was no formal relationship between Hulu and Boxee. That’s the power of open standards. Hulu was publishing bits. Boxee was consuming those bits. The result? An integrated system, good enough to seriously consider dropping your cable TV subscription. Huzzah.

Notably absent from Hulu's announcement: Hulu content is also available on the Xbox 360 and PlayStation 3 via PlayOn, which serves much the same niche as Boxee. Similarly, there's an XBMC Hulu plugin (recall that Boxee is based on the open-source XBMC project). We don't know whether Hulu will continue to work with these other platforms or not. Hulu seems to be taking the approach of asking Boxee nicely to walk away. Will they ask the other projects to pull their Hulu support as well? Will all of those projects actually agree to pull the plug, or will Hulu be forced to go down the failed DRM road?

It’s safe to predict that it won’t be pretty. My AppleTV can run XBMC just as well as it can run Boxee, which naturally returns us to the question of the obsolescence of cable TV.

There’s a truism that, if your product is going to become obsolete, you should be the one who makes it obsolete. Example: hardwired home telephones are going away. In rich countries, people use their cell phone so much that they eventually notice that they don’t need the landline any more. In poor countries, the cost of running wires is too high, so it’s cheaper to deploy cellular phones. Still, guess who runs the cell phone networks? It’s pretty much the same companies who run the wired phone networks. They make out just fine (except, perhaps, with international calling, where Skype and friends provide great quality for effectively nil cost).

Based on what I’ve observed, it’s safe to predict that cable TV, satellite TV, and maybe even over-the-air TV, are absolutely, inevitably, going to be rendered obsolete by Internet TV. Perhaps they can stave off the inevitable by instituting a la carte pricing plans, so I could get the two cable channels I actually care about and ignore the rest. But if they did that, their whole business model would be smashed to bits.

For my prediction to pan out, we have to ask whether the Internet can handle all that bandwidth. As an existence proof, it's worth pointing out that I can also get AT&T U-verse for a price competitive with my present Comcast service. AT&T bumps up your DSL to around 30Mb/sec, and you get an HD DVR that sucks your shows down over your DSL line. They're presumably using some sort of content distribution network to keep their bandwidth load reasonable, and the emphasis is on real-time TV channel watching, which lowers their need to store bits in the CDN fabric. Still, it's easy to imagine U-verse scaling up to support video on demand across Hulu's or Netflix's full library of titles.
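
To put rough numbers behind that intuition, here is a back-of-the-envelope sketch. The bitrates, channel count, and viewing assumptions are mine, not AT&T's; the only point is to show why delivering live channels is so much cheaper than delivering everything on demand.

```python
# Back-of-the-envelope sketch; every input below is an assumption, not a
# measurement of AT&T's network.

HOMES = 1_000_000        # homes served from one metro head-end (assumed)
HD_STREAM_MBPS = 8       # assumed bitrate of a compressed HD stream
CHANNELS = 300           # assumed live channel lineup
PEAK_FRACTION = 0.5      # fraction of homes watching at the busiest hour

# Live channels: one copy of each channel leaves the head-end, multicast out.
live_tv_gbps = CHANNELS * HD_STREAM_MBPS / 1000

# True video on demand: every active viewer needs a separate unicast stream.
vod_peak_gbps = HOMES * PEAK_FRACTION * HD_STREAM_MBPS / 1000

print(f"Live multicast TV: ~{live_tv_gbps:.1f} Gb/s leaving the head-end")
print(f"On-demand at peak: ~{vod_peak_gbps:,.0f} Gb/s leaving the head-end")
```

Under these assumptions the gap is roughly three orders of magnitude, which is one way to see why a design centered on real-time channels keeps the delivery load modest, and why scaling the same infrastructure to a full on-demand library is the harder problem.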

U-verse does a good enough job of pretending to be just like cable that it’s completely uninteresting to me. But if their standards were open and free of DRM, then third parties, like TiVo or Boxee, could build compatible boxes and we’d really have something interesting. I’d drop my cable for that.

(One of my colleagues has U-verse, and he complains that, when his kids watch TV, he can feel the Internet running slower. “Hey you kids, can you turn off the TV? I’m trying to do a big download.” It’s the future.)

TiVo, AppleTV, Boxee, and the future of HD television delivery

I don't watch as much TV as I once did. Yet I'm still paying Comcast every month, as they're the only provider who will sell me HD service compatible with my TiVo-HD. Sadly, Comcast is far from ideal. I'm regularly frustrated by their inability to debug their signal quality problems. (My ABC-HD and PBS-HD signals are right on the edge in terms of signal quality, so any slight degradation produces MPEG block errors that make those channels unwatchable, which seems to happen on an irregular basis.) Comcast customer service wants me to sit around all day waiting for a tech to come out, when the problem has nothing whatsoever to do with my house. When I've attempted to report the signal-strength measurements I've taken and how they vary from channel to channel, I've found I might as well be speaking to a brick wall.

Yes, I know I could put an old-school antenna on the roof and feed it into my TiVo. That would work pretty well for the local channels, but then why am I paying Comcast at all? Answer: for the handful of shows that we watch from cable channels. More than one person has asked me why I don't just download these shows online and cut the cable. You can get Comedy Central programming from their web site. You can get all sorts of things from Hulu.com. All free and legal!

To that end, I've hacked my AppleTV with the latest patchstick, a remarkably painless process. Now my AppleTV, running Boxee (which is based on the open-source XBMC project), can play DVD rips from my file server (including DVD menus) and just about anything I download from BitTorrent [see sidebar], and it can pull content from a variety of streaming providers, including Comedy Central and Hulu.com. In theory, that covers enough ground that I could legitimately consider dropping the Comcast subscription altogether.

In practice, the Internet TV experience was a let-down. I've got AT&T's "Elite" DSL package ("up to 6Mb", which is pretty close to what I see in practice), so I've got enough bandwidth for streaming. What I actually see comes nowhere near using that bandwidth. Comedy Central isn't delivering anywhere near 30 frames per second; it's jumpy and unwatchable. Hulu has moments of greatness (i.e., higher resolution and quality than the non-HD channels that Comcast feeds me, though nowhere near broadcast HD), but Hulu also freezes up, sometimes for seconds at a time. If Boxee implemented TiVo-like Season Passes, it could download my shows in advance and yield a real winner of an experience. Or TiVo could implement Hulu support, since it already does batch downloads of Internet video content, mostly from Amazon, albeit at low SD quality and with unacceptable self-destructing DRM.
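
A quick calculation suggests why downloading in advance, Season Pass style, would work even over a link that can't sustain a smooth real-time stream. The bitrates and episode length below are my own assumptions, not measured figures.

```python
# Sketch with assumed numbers: why batch downloading beats fragile streaming.

EFFECTIVE_MBPS = 4      # assume a "6Mb" line loses a third to overhead/congestion
SD_STREAM_MBPS = 2.5    # assumed bitrate of a decent SD web stream
EPISODE_MINUTES = 44    # an hour-long show minus the ads

size_megabits = SD_STREAM_MBPS * EPISODE_MINUTES * 60
download_minutes = size_megabits / (EFFECTIVE_MBPS * 60)

print(f"Episode size: ~{size_megabits / 8 / 1000:.1f} GB")
print(f"Fetch time at {EFFECTIVE_MBPS} Mb/s: ~{download_minutes:.0f} minutes "
      f"for {EPISODE_MINUTES} minutes of video")
```

Even with a chunk of the nominal bandwidth lost to overhead, the whole episode arrives well before you could finish watching it, so momentary stalls in delivery would never reach the screen.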

Astute readers will note that I have several other options left to pursue. I could sign up for an unlimited Netflix subscription and have access to their streaming library (either to my TiVo or to my Boxee/AppleTV). I could also “subscribe” to the shows that I care about through Apple’s iTunes Store. (That’s how I’ve been watching Entourage, since I can’t otherwise justify the $20/month that I’d have to pay Comcast for HBO. See also the sidebar.)

Netflix doesn't have the current TV shows that I want, and the iTunes Store is pretty pricey. Those Entourage episodes are $2 each for 30 minutes of SD-quality video. iTunes HD content, when available, is pretty much broadcast HD quality. Good stuff. iTunes SD content looks fine on an iPhone, but it has a variety of problems on a proper HD set, most notably that dark colors are pulled down to 100% black, presumably to improve compression. Very distracting. Regardless, friends of mine with Netflix streaming seem to swear by it, and the iTunes Store clearly provides a good experience, albeit at high prices.

Clearly, Comcast is in deep trouble. Their product is expensive. Their customer service is lacking. Similar issues can be expected for other cable TV vendors, much less the satellite people. The Internet already has sufficient capacity to deliver the non-broadcast shows that I follow, directly to my TV. All the pieces are in place and they’re starting to work well together. The only missing piece is the business model for the future of online TV delivery. Hulu.com, for example, probably thinks they have to require video streaming so they can force you to watch ads. If you could download it, you could skip the ads and there goes their revenue.

I figure the one true hope in all of this is the ever-declining cost of serving up content. At some distant point in the future, the cost of delivering tens of megabits per second of video, for several hours every day, to all of the homes who might want it, will eventually be small enough to not matter any more. Once we get there, the people who make shows can sell them direct to the consumer, insert occasional and targeted ads, and still come out ahead. It could be a long wait.

[Sidebar: BitTorrent is a brilliant system, from a technical perspective, but it was never designed to provide any anonymity to its users. If you join the torrent for, say, an HBO show, HBO can trivially observe that you (or, at least, your IP address) is there, giving them grounds to go after you in one form or another. From that perspective, you'd have to be insane to download a mainstream movie or TV show from BitTorrent, or you'd have to do something terribly anti-social, like tunnel your entire BitTorrent session through Tor, which Tor was never designed to handle, although there are several designs to improve Tor or anonymize BitTorrent. So then, what shows do I feel safe to download via BitTorrent? So far, only the latest episodes of the BBC's Top Gear. They air in the U.K. six months to a year ahead of their appearance on BBC America and availability on the U.S. iTunes Store. If there were a way to get these shows in the U.S. simultaneous with their British release, I'd happily pay for the privilege, even the $2 rate at the iTunes Store, but I'm not given that option at any price.]

New Internet? No Thanks.

Yesterday’s New York Times ran a piece, “Do We Need a New Internet?” suggesting that the Internet has too many security problems and should therefore be rebuilt.

The piece has been widely criticized in the technical blogosphere, so there’s no need for me to pile on. Anyway, I have already written about the redesign-the-Net meme. (See Internet So Crowded, Nobody Goes There Anymore.)

But I do want to discuss two widespread misconceptions that found their way into the Times piece.

First is the notion that today’s security problems are caused by weaknesses in the network itself. In fact, the vast majority of our problems occur on, and are caused by weaknesses in, the endpoint devices: computers, mobile phones, and other widgets that connect to the Net. The problem is not that the Net is broken or malfunctioning, it’s that the endpoint devices are misbehaving — so the best solution is to secure the endpoint devices. To borrow an analogy from Gene Spafford, if people are getting mugged at bus stops, the solution is not to buy armored buses.

(Of course, there are some security issues with the network itself, such as vulnerability of routing protocols and DNS. We should work on fixing those. But they aren’t the problems people normally complain about — and they aren’t the ones mentioned in the Times piece.)

The second misconception is that the founders of the Internet had no plan for protecting against the security attacks we see today. Actually they did have a plan which was simple and, if executed flawlessly, would have been effective. The plan was that endpoint devices would not have remotely exploitable bugs.

This plan was plausible, but it turned out to be much harder to execute than the founders could have foreseen. It has become increasingly clear over time that developing complex Net-enabled software without exploitable bugs is well beyond the state of the art. The founders’ plan is not working perfectly. Maybe we need a new plan, or maybe we need to execute the original plan better, or maybe we should just muddle through. But let’s not forget that there was a plan, and it was reasonable in light of what was known at the time.

As I have said before, the Internet is important enough that it’s worthwhile having people think about how it might be redesigned, or how it might have been designed differently in the first place. The Net, like any large human-built institution, is far from perfect — but that doesn’t mean that we would be better off tearing it down and starting over.

Final version of Government Data and the Invisible Hand

Thanks to the hard work of our patient editors at the Yale Journal of Law and Technology, my coauthors and I can now share the final version of our paper about online transparency, Government Data and the Invisible Hand.

If you have read the first version, you know that our paper is informed by a deep disappointment with the current state of the federal government's Internet presence. Naive viewers, as we once were, might look at the chaos of clunky sites in .gov and entertain doubts about the webmasters who run those sites. But that would be—was, on our part—a mistake. We're happy to set the record straight today.

Barack Obama's web team is certainly one of the best that has ever been assembled. His staff did a fantastic job on the campaign site, and produced an excellent, if slightly less dynamic, transition site at Change.gov. On its way to the White House, however, a team made up of many of the same people seemed to lose its mojo. The complaints about the new Whitehouse.gov site—slow to be updated, lacking in interactivity—are familiar to observers of other .gov sites throughout the government.

What happened? It’s not plausible to suppose that Obama’s staffers have somehow gotten worse as they have moved from campaign to transition to governance. Instead, they have faced an increasingly stringent and burdensome array of regulations as they have become progressively more official. The transition was a sort of intermediate phase in this respect, and the new team now faces the Presidential Records Act, the Paperwork Reduction Act, and a number of other pre-Internet statutory obligations. This experience teaches that the limitations of the federal web reflect the thicket of rules to which such sites are subject—not the hardworking people who labor under those rules.

One of the most exciting things about the new administration’s approach to online media is the way it seeks to enable federal webmasters to move beyond some of the limitations of dated policies, using their expertise to leverage government data online.

My coauthors and I look forward to continuing to work on these issues. We are humbled to recognize the remarkable reservoir of talent and energy that is being brought to bear on the problem, from both within and beyond government.

The Future of News: We're Lucky They Haven't Tried Macropayments

Regular readers will know that the newspaper industry is in dire shape: revenues off by 20% in just the last year, with more than 15,000 jobs lost in that period. This map tells the story better than any writing could. The market capitalizations of newspaper firms, which reflect investor expectations about future performance, have fallen even more precipitously. In short, it’s hard to exaggerate how dire the situation facing the industry is. If you were in charge of a newspaper, survival in any form possible would rationally be your all-consuming focus.

Walter Isaacson, the former editor of TIME magazine and current President of the Aspen Institute, wrote a column last week arguing that newspapers should squeeze revenue out of their web sites through “micropayments.” It’s an idea with a long, but not very successful, history: Isaacson himself points out that Ted Nelson, the inventor of hypertext, imagined micropayments for written content back in the early 1960s.

Small payments, on the order of a dollar, work well for some kinds of highly valued, contextualized content, like a book to your Kindle or a song to your iPod. But “micro” payments on the order of a nickel—the figure Isaacson mentions for a hypothetical news story—have never taken off. Transaction costs, caused by things like credit card processing, are usually cited as the reason, but I’ve never found that view persuasive: It’s not hard to set up a system in which micro transactions are aggregated into parcels of at least a few dollars before being channeled through our existing credit card infrastructure.
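
To make that concrete, here is a minimal sketch of the sort of aggregation I have in mind. The five-dollar threshold and the ledger design are illustrative assumptions, not a description of any existing payment system.

```python
# Minimal aggregation sketch; the threshold is an illustrative assumption.
# Amounts are tracked in integer cents to avoid floating-point rounding.

from collections import defaultdict

BATCH_THRESHOLD_CENTS = 500   # don't touch the card network below $5.00


class MicropaymentLedger:
    def __init__(self):
        self.balances_cents = defaultdict(int)

    def charge(self, reader_id: str, cents: int) -> None:
        """Record a tiny charge locally; settle only once it adds up."""
        self.balances_cents[reader_id] += cents
        if self.balances_cents[reader_id] >= BATCH_THRESHOLD_CENTS:
            self._settle(reader_id)

    def _settle(self, reader_id: str) -> None:
        total = self.balances_cents.pop(reader_id)
        # One real credit-card transaction covers ~100 nickel articles,
        # so the fixed per-transaction fee is amortized across the batch.
        print(f"Billing {reader_id}: ${total / 100:.2f} in one card charge")


ledger = MicropaymentLedger()
for _ in range(100):          # one hundred 5-cent article views
    ledger.charge("reader-42", 5)
```

The engineering, in other words, is not the hard part; as the next paragraph argues, the real obstacle is how being nickel-and-dimed makes readers feel.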

The Occam’s razor explanation for the persistent failure of micropayments is much simpler: People hate them. The niggling feeling of being charged a marginal amount for each little thing you do exacts a psychological cost that often suffices to undermine the pleasure of the good or service you receive on an a la carte basis. That’s why monthly gym memberships, pay-one-price amusement parks, and subscription services like Netflix or, come to think of it, regular cable are popular, even when a la carte options would be (financially) cheaper for consumers.

Michael Kinsley, the former editor of Slate, responded to Isaacson in a piece headlined You Can’t Sell News by the Slice. His basic message: We tried getting users to pay for content online—in Slate’s case, as an inexpensive annual subscription—and it didn’t work. One problem noted by both Isaacson and Kinsley is that readers have come to expect content to be free, and when individual papers have tried to start charging, they’ve failed.

What can the papers do? Isaacson is on to something when he says:

Another group that benefits from free journalism is Internet service providers. They get to charge customers $20 to $30 a month for access to the Web’s trove of free content and services. As a result, it is not in their interest to facilitate easy ways for media creators to charge for their content. Thus we have a world in which phone companies have accustomed kids to paying up to 20 cents when they send a text message but it seems technologically and psychologically impossible to get people to pay 10 cents for a magazine, newspaper or newscast.

If struggling news outlets were really bold—and grimly realistic about how little they have to lose, from a business point of view—they might decide to seek revenue at the ISP level. The plan: begin segmenting site visitors by ISP, and charge ISPs for content. Under this plan, if your ISP has paid the news syndicate, you get to see the news. If you try to visit one of the participating sites and your ISP has not paid the syndicate, then you see a different page, possibly one that urges you to call your ISP and demand access to the syndicated content. It's the same model controversially adopted by ESPN360.com (go ahead, check and see whether you have access). Imagine, hypothetically, that a handful of top papers, such as the New York Times, Washington Post, and LA Times, jointly with TIME and Newsweek, formed a syndicate that charged ISPs a fixed rate per user-month of access. ISPs, in other words, would make a small number of large ("macro") payments to content providers, and these payments, along with advertising, would be a primary source of revenue for these outlets.
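
Mechanically, the gating could be as simple as mapping each visitor's IP address to an ISP and checking whether that ISP has paid. The sketch below is purely illustrative: the ISP names and address blocks are invented, and a real deployment would derive the mapping from routing or whois data rather than a hard-coded table.

```python
# Illustrative sketch of ISP-keyed access control. The address blocks and
# ISP names are made up; real deployments would build the table from
# routing or whois data instead of hard-coding it.

import ipaddress

PAYING_ISPS = {"ExampleNet"}         # ISPs that have paid the news syndicate

ISP_BLOCKS = {                       # hypothetical ISP -> network blocks
    "ExampleNet": [ipaddress.ip_network("192.0.2.0/24")],
    "OtherISP":   [ipaddress.ip_network("198.51.100.0/24")],
}


def isp_for(address: str):
    """Return the ISP whose block contains this address, or None."""
    ip = ipaddress.ip_address(address)
    for isp, blocks in ISP_BLOCKS.items():
        if any(ip in block for block in blocks):
            return isp
    return None


def serve(address: str) -> str:
    isp = isp_for(address)
    if isp in PAYING_ISPS:
        return "200 OK: full article"
    # Same move as ESPN360: a nag page telling you to call your ISP.
    return "402: Ask your ISP to subscribe to the news syndicate"


print(serve("192.0.2.17"))       # ExampleNet customer: sees the article
print(serve("198.51.100.5"))     # non-paying ISP: sees the nag page
```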

I am, as Paul Ohm might urge me to say, NAL (Not a Lawyer), but I suspect that such a syndicate might well pass antitrust scrutiny. The syndicate would certainly not make it hard to find news on the web; it would simply make it hard to find certain high-quality sources. Participating publications might elect to offer free access to certain population segments, such as users in developing countries, who cannot pay or whose exclusion would cause a concentrated public-interest harm. ESPN360, for example, reportedly gives free access to anyone who surfs in from a .edu domain. (No doubt this is also a marketing tactic.)

For some definitions of the term “net neutrality,” such a move by news providers would be a violation of net neutrality. Other definitions of the term would place this behavior outside of its scope. But no matter how you look at it, the substance of such a move would be troubling: it would amount to removing these great sources of journalism from the Internet proper, and placing them instead in a kind of walled garden. If that trend took off and became very widespread, it could amount to a return to the bad old days of walled garden services like AOL and Prodigy.

A second good argument that this arrangement would be undesirable is that it would force all users of a particular ISP to pay for content that only some of them want to access. There's a sense in which such cross-subsidies are already the norm: those who use their ISP subscriptions for email and web browsing subsidize the heavier network usage of video aficionados and other leading-edge consumers who are way out on the tail and use the lion's share of the bandwidth. But this scheme, in its deliberateness, would take such cross-subsidies to a new and different level.

A third good argument against this idea is that it would introduce awkward relationships between news outlets and ISPs, in a manner that would impair news coverage of the Internet and telecommunications industries.

A fourth argument is that people might systematically pirate the blocked content, using systems like Tor to reach the news sites via approved endpoints. (My own view is that this probably isn't the strongest argument, since many users are uninterested in this sort of maneuver, or even in the easy Firefox plugin that would likely arise to enable it. Moreover, the content syndicate would pool its resources toward aggressive litigation to stem this trend, and the payments would be extracted from law-abiding ISPs, not individual users.)

I can imagine a potentially compelling case being made that such behavior by content providers should be regulated or outlawed. But today I think it is neither. And given the news industry’s desperation, the fact that such a move would be unpopular could turn out to be moot if they can persuade ISPs to pay. If someone capable and hardworking set out to sell the idea to a group of newspaper and newsmagazine publishers, I fear they might prove quite persuasive.

Rethinking the voting system certification process

Lawsuits! Everybody's filing lawsuits. Premier Election Systems (formerly Diebold) is suing SysTest, one of the EAC's testing authorities (or, more properly, former testing authorities, now that the EAC is planning to suspend its accreditation). There's also a lawsuit between the State of Ohio and Premier over whether Premier's voting systems satisfy Ohio's requirements. Likewise, ES&S is being sued by San Francisco, the State of California, and the State of Oregon. A Pennsylvania county won a judgment against Advanced Voting Systems after AVS's systems were decertified (and AVS never even bothered showing up in court to defend itself). And that's just scratching the surface.

What's the real problem here? Electronic voting systems were "certified", sold, and deployed, and then turned out to have a variety of defects, ranging from "simple" bugs to significant security flaws. Needless to say, it takes time, effort, and money to build better voting machines, to say nothing of pushing them through the certification process. And nobody really understands what the certification process even is anymore. In the bad old days, a "federally certified voting system" was tested by one of a handful of "independent testing authorities" (ITAs), accredited by the National Association of State Election Directors, against the government's "voluntary voting system guidelines" (the 2002 edition, for the most part). This original process demonstrably failed to yield well-engineered, secure, or even particularly usable voting systems. So how have things improved?

Now, NASED has been pushed aside by the EAC, and the process has been glacial. So far as I can tell, no electronic voting system used in the November 2008 election had code that was in any way different from what was used in the November 2006 election.

Regardless of whether we jettison the DREs and move to optical scan, plenty of places will continue using DREs. And there will be demand for new features in both DREs and optical scanners. And bug fixes. The certification pipeline must be vigilant, yet it needs to get rolling again. In a hurry, but with great caution and care. (Doesn’t sound very feasible, I know.)

Okay, then let's coerce vendors into building better products! Require the latest standards! While brilliant in theory, such a process is doomed to continue the practical failures (and lawsuits) that we're seeing today. The present standards are voluminous. They are also quite vague where it matters, because there is no way to write a standard that's both general enough to apply to every possible voting system and specific enough to adequately require good development practices. The present standards err, arguably correctly, on the vague side, which then requires the testing authorities to do some interpretation. Doing that properly requires competent testing labs and competent developers, working together.

Unfortunately, they don’t work together at all (never mind issues of competence). The current business model is that developers toil away, perhaps talking to their customers, but not interacting with the certification process at all until they’re “done,” after which they pitch the system over the wall, write a big check, and cross their fingers that everything goes smoothly. If the testing authority shoots it down, they need to sort out why and try again. Meanwhile, you’ve got the Great States of California and Ohio doing their own studies, with testers like yours truly who don’t particularly care what the standards say and are instead focused on whether the machines are robust in the face of a reasonable threat model. Were the problems we found outside of the standards’ requirements? We don’t care because they’re serious problems! Unfortunately, from the vendor’s perspective, they now need to address everything we found, and they have no idea whether or not they’ll get it right before they may or may not face another team of crack security ninjas.

What I want to see is a grand bargain. The voting system vendors open up their development processes to external scrutiny and regulation. In return, they get feedback from the certifying authorities that their designs are sound before they begin prototyping. Then they get feedback that their prototypes are sound before they flesh out all the details. This necessarily entails the vendors letting the analysts in on their bugs lists (one of the California Secretary of State’s recommendations to the EAC), further increasing transparency. Trusted auditors could even look at the long-term development roadmap and make judgments that incremental changes, available in the short term, are part of a coherent long-term plan to engineer a better system. Alternately, the auditors could declare the future plans to be a shambles and refuse to endorse even incremental improvements. Invasive auditing would give election authorities the ability to see each vendor’s future, and thus reach informed decisions about whether to support incremental updates or to dump a vendor entirely.

Where can we look for a role model for this process? I initially thought I'd write something here about how the military procures weapons systems, but there are too many counter-examples where that process has gone wrong. Instead, let's look at how houses are built (or, at least, how they should be built). You don't just go out, buy the lumber and nails, hire people off the street, and start banging away. Oh no! You start with blueprints. Those are checked off by the city zoning authorities, the neighborhood beauty and integrity committee, and so forth. Then you start getting permits. Demolition permits. Building permits. Electrical permits. At each stage of construction, the city inspectors, the prospective owners, and even the holders of the construction loan may want to come out and inspect the work. If, for example, there's an electrical problem, it's an order of magnitude easier to address it before you put up the interior walls.

For voting systems, then, who should do the scrutiny? Who should scrutinize the scrutineers? Where’s the money going to come from to pay for all this scrutiny? It’s unclear that any of the testing authorities have the deep skills necessary to do the job. It’s similarly unclear that you can continually recruit “dream teams” of the best security ninjas. Nonetheless, this is absolutely the right way to go. There are only a handful of major vendors in the e-voting space, so recruiting good talent to audit them, on a recurring part-time basis, is eminently feasible. Meta-scrutiny comes from public disclosure of the audit reports. To save some money, there are economies of scale to be gained from doing this at the Federal level, although it only takes a few large states to band together to achieve similar economies of scale.

At the end of the day, we want our voting systems to be the best they can be, regardless of what technology they happen to be using. I will argue that this ultimately means that we need vendors working more closely with auditors, whether we’re considering primitive optical scanners or sophisticated end-to-end cryptographic voting schemes. By pushing the adversarial review process deeper into the development pipeline, and increasing our transparency into how the development is proceeding, we can ensure that future products will be genuine improvements over present ones, and hopefully avoid all these messy lawsuits.

[Sidebar: what about protecting the vendors' intellectual property? As I've argued before, this is what copyrights and patents are about. I offer no objection to vendors owning the copyright on their code. Patents are a bit trickier. If the auditors decide that some particular feature should be mandatory, and one vendor patents it, then every other vendor could potentially infringe that patent. This problem conceivably happens today, even without the presence of invasive auditors. Short of forbidding voting-machine patents as a prerequisite for voting system certification, this issue will never go away entirely. The main thing that I want to do away with, entirely, is trade secrets. If you want to sell a voting machine, then you should completely waive any trade secret protection, ultimately yielding a radical improvement in election transparency.]

Being Acquitted Versus Being Searched (YANAL)

With this post, I’m launching a new, (very) occasional series I’m calling YANAL, for “You Are Not A Lawyer.” In this series, I will try to disabuse computer scientists and other technically minded people of some commonly held misconceptions about the law (and the legal system).

I start with something from criminal law. As you probably already know, in the American criminal law system, as in most others, a jury must find a defendant guilty “beyond a reasonable doubt” to convict. “Beyond a reasonable doubt” is a famously high standard, and many guilty people are free today only because the evidence against them does not meet this standard.

When techies think about criminal law, and in particular crimes committed online, they tend to fixate on this legal standard, dreaming up ways people can use technology to inject doubt into the evidence to avoid being convicted. I can’t count how many conversations I have had with techies about things like the “open wireless access point defense,” the “trojaned computer defense,” the “NAT-ted firewall defense,” and the “dynamic IP address defense.” Many people have talked excitedly to me about tools like TrackMeNot or more exotic methods which promise, at least in part, to inject jail-springing reasonable doubt onto a hard drive or into a network.

People who place stock in these theories and tools are neglecting an important drawback. There is another set of legal standards, the standards governing search and seizure, that you should worry about long before you ever get to "beyond a reasonable doubt". Omitting a lot of detail: the police, even without going to a judge first, can obtain your name, address, and credit card number from your ISP if they can show the information is relevant to a criminal investigation. They can obtain transaction logs (think apache or sendmail logs) after convincing a judge the evidence is "relevant and material to an ongoing criminal investigation." If they have probable cause, another famous but often misunderstood standard, they can read all of your stored email, rifle through your bedroom dresser drawers, and image your hard drive. If they jump through a few other hoops, they can wiretap your telephone. Some of these standards aren't easy to meet, but all of them are well below the "beyond a reasonable doubt" standard for guilt.

So by the time you’ve had your Perry Mason moment in front of the jurors, somehow convincing them that the fact that you don’t enable WiFi authentication means your neighbor could’ve sent the death threat, your life will have been turned upside down in many ways: The police will have searched your home and seized all of your computers. They will have examined all of the files on your hard drives and read all of the messages in your inboxes. (And if you have a shred of kiddie porn stored anywhere, the alleged death threat will be the least of your worries. I know, I know, the virus on your computer raises doubt that the kiddie porn is yours!) They will have arrested you and possibly incarcerated you pending trial. Guys with guns will have interviewed you and many of your friends, co-workers, and neighbors.

In addition, you will have been assigned an overworked public defender who has no time for far-fetched technological defenses and prefers you take a plea bargain, or you will have paid thousands of dollars to a private attorney who knows less than the public defender about technology, but who is “excited to learn” on your dime. Maybe, maybe, maybe after all of this, your lawyer convinces the judge or the jury. You’re free! Congratulations?

Police and prosecutors must satisfy many legal standards along the way, many of which are much easier to meet than "beyond a reasonable doubt", and most of which are met long before anyone examines your open access point or notices a virus infection. By meeting any of these standards, they can seriously disrupt your life, even if they never end up putting you away.