Archives for 2009

Optical-scan voting extremely accurate in Minnesota

The recount of the 2008 Minnesota Senate race gives us an opportunity to evaluate the accuracy of precinct-count optical-scan voting. Though there have been contentious disputes over which absentee ballot envelopes to open, the core technology for scanning ballots has proved to be extremely accurate.

The votes were counted by machine (except for part of one county that counts votes by hand), then every single ballot was examined by hand in the recount.

The “net” accuracy of optical-scan voting was 99.99% (see below).
The “gross” accuracy was 99.91% (see below).
The rate of ambiguous ballots was low: 99.99% of ballots were unambiguous (see below).

My analysis is based on the official spreadsheet from the Minnesota Secretary of State. I commend the Secretary of State for his commitment to transparency, making the data available in such an easy-to-analyze format. The vast majority of the counties use ES&S M100 precinct-count optical scanners; a few use other in-precinct scanners.

I exclude from this analysis all disputes over which absentee ballots to open. Approximately 10% of the ballots included in this analysis are optically scanned absentee ballots that were not subject to dispute over eligibility.

There were 2,423,851 votes counted for Coleman and Franken. The “net” error rate is the net change in the vote margin between the machine scan and the hand recount (not including changes related to qualification of absentee ballot envelopes). This change was 264 votes, for an accuracy of 99.99% (an error of about one part in ten thousand).

The “gross” error rate is the total number of individual ballots either added to one candidate, or subtracted from one candidate, by the recount. A ballot that was changed from one candidate to the other will count twice, but such ballots are rare. In the precinct-by-precinct data, the vast majority of precincts have no change; many precincts have exactly one vote added to one candidate; few precincts have votes subtracted, or more than one vote added, or both.

The recount added a total of 1,528 votes to the candidates, and subtracted a total of 642 votes, for a gross change of 2,170 (again, not including absentee ballot qualification). Thus, the “gross” error rate is about 1 in 1,000, or a gross accuracy of 99.91%.
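To make the arithmetic concrete, here is a minimal Python sketch (mine, not part of the official analysis) that reproduces the net and gross accuracy figures from the counts quoted above.

```python
# Reproduce the net and gross accuracy figures from the counts quoted above.
total_votes = 2_423_851   # votes counted for Coleman and Franken
net_change = 264          # net change in the margin, machine count vs. hand recount
votes_added = 1_528       # individual votes added by the recount
votes_subtracted = 642    # individual votes subtracted by the recount

net_accuracy = 1 - net_change / total_votes
gross_change = votes_added + votes_subtracted          # 2,170
gross_accuracy = 1 - gross_change / total_votes

print(f"net accuracy:   {net_accuracy:.4%}")    # ~99.99%
print(f"gross accuracy: {gross_accuracy:.4%}")  # ~99.91%
```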

Ambiguous ballots: During the recount, the Coleman and Franken campaigns initially challenged a total of 6,655 ballot-interpretation decisions made by the human recounters. The State Canvassing Board asked the campaigns to voluntarily withdraw all but their most serious challenges, and in the end approximately 1,325 challenges remained. That is, approximately 5 ballots in 10,000 were ambiguous enough that one side or the other felt like arguing about it. The State Canvassing Board, in the end, classified all but 248 of these ballots as votes for one candidate or another. That is, approximately 1 ballot in 10,000 was ambiguous enough that the bipartisan recount board could not determine an intent to vote. (This analysis is based on the assumption that if the voter made an ambiguous mark, then this ballot was likely to be challenged either by one campaign or the other.)
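The per-ten-thousand figures follow directly from the same vote total. A small sketch (again mine, not the official analysis) of that arithmetic:

```python
# Ambiguous-ballot rates, using the same 2,423,851 Coleman/Franken votes.
total_votes = 2_423_851
challenges_remaining = 1_325   # challenges left after voluntary withdrawals
unresolved = 248               # ballots the Canvassing Board could not resolve

print(f"challenged per 10,000 ballots: {challenges_remaining / total_votes * 10_000:.1f}")  # ~5.5
print(f"unresolved per 10,000 ballots: {unresolved / total_votes * 10_000:.1f}")            # ~1.0
```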

Caveat: As with all voting systems, including optical-scan, DREs, and plain old paper ballots, there is also a source of error from voters incorrectly translating their intent into the marked ballot. Such error is likely to be greater than 0.1%, but the analysis I have done here does not measure this error.

Hand counting: Saint Louis County, which uses a mix of optical-scan and hand-counting, had a higher error rate: net accuracy 99.95%, gross accuracy 99.81%.

Tech Policy Challenges for the Obama Administration

[Princeton’s Woodrow Wilson School asked me to write a short essay on information technology challenges facing the Obama Administration, as part of the School’s Inaugural activities. Here is my essay.]

Digital technologies can make government more effective, open and transparent, and can make the economy as a whole more flexible and efficient. They can also endanger privacy, disrupt markets, and open the door to cyberterrorism and cyberespionage. In this crowded field of risks and opportunities, it makes sense for the Obama administration to focus on four main challenges.

The first challenge is cybersecurity. Government must safeguard its own mission-critical systems, and it must protect privately owned critical infrastructures such as the power grid and communications networks. But it won’t be enough to focus only on a few high-priority, centralized systems. Much of digital technology’s value—and, today, many of the threats—come from ordinary home and office systems. Government can use its purchasing power to nudge the private sector toward products that are more secure and reliable; it can convene standards discussions; and it can educate the public about basic cybersecurity practices.

The second challenge is transparency. We can harness the potential of digital technology to make government more open, leading toward a better informed and more participatory civic life. Some parts of government are already making exciting progress, and need high-level support; others need to be pushed in the right direction. One key is to ensure that data is published in ways that foster reuse, to support an active marketplace of ideas in which companies, nonprofits, and individuals can find the best ways to analyze, visualize, and “mash up” government information.

The third challenge is to maintain and increase America’s global lead in information technology, which is vital to our prosperity and our role in the world. While recommitting to our traditional strengths, we must work to broaden the reach of technology. We must bring broadband Internet connections to more Americans, by encouraging private-sector investment in high-speed network infrastructure. We must provide better education in information technology, no less than in science or math, to all students. Government cannot solve these problems alone, but can be a catalyst for progress.

The final challenge is to close the culture gap between politicians and technology leaders. The time for humorous anecdotes about politicians who “don’t get” technology, or engineers who are blind to the subtleties of Washington, is over. Working together, we can translate technological progress into smarter government and a more vibrant, dynamic private sector.

Wu on Zittrain's Future of the Internet

Related to my previous post about the future of open technologies, Tim Wu has a great review of Jonathan Zittrain’s book. Wu reviews the origins of the 20th century’s great media empires, which steadily consolidated once-fractious markets. He suggests that the Internet likely won’t meet the same fate. My favorite part:

In the 2000s, AOL and Time Warner took the biggest and most notorious run at trying to make the Internet more like traditional media. The merger was a bet that unifying content and distribution might yield the kind of power that Paramount and NBC gained in the 1920s. They were not alone: Microsoft in the 1990s thought that, by owning a browser (Explorer), dial-in service (MSN), and some content (Slate), it could emerge as the NBC of the Internet era. Lastly, AT&T, the same firm that built the first radio network, keeps signaling plans to assert more control over “its pipes,” or even create its own competitor to the Internet. In 2000, when AT&T first announced its plans to enter the media market, a spokesman said: “We believe it’s very important to have control of the underlying network.”

Yet so far these would-be Zukors and NBCs have crashed and burned. Unlike radio or film, the structure of the Internet stoutly resists integration. AOL tried, in the 1990s, to keep its users in a “walled garden” of AOL content, but its users wanted the whole Internet, and finally AOL gave in. To make it after the merger, AOL-Time Warner needed to build a new garden with even higher walls–some way for AOL to discriminate in favor of Time Warner content. But AOL had no real power over its users, and pretty soon it did not have many of them left.

I think the monolithic media firms of the 20th century ultimately owed their size and success to economies of scale in the communication technologies of their day. For example, a single newspaper with a million readers is a lot cheaper to produce and distribute than ten newspapers with 100,000 readers each. And so the larger film studios, newspapers, broadcast networks, and so on were able to squeeze out smaller players. Once one newspaper in a given area began reaping the benefits of scale, its competitors found it difficult to turn a profit, and a lot of them went out of business or were acquired at fire-sale prices.

On the Internet, distributing content is so cheap that economies of scale in distribution just don’t matter. On a per-reader basis, my personal blog certainly costs more to operate than CNN. But the cost is so small that it’s simply not a significant factor in deciding whether to continue publishing it. Even if the larger sites capture the bulk of the readership and advertising revenue, that doesn’t preclude a “long tail” of small, often amateur sites with a wide variety of different content.

The Perpetual Peril of Open Platforms

Over at Techdirt, Mike Masnick did a great post a few weeks back on a theme I’ve written about before: people’s tendency to underestimate the robustness of open platforms.

Once people have a taste for what that openness allows, stuffing it back into a box is very difficult. Yes, it’s important to remain vigilant, and yes, people will always attempt to shut off that openness, citing all sorts of “dangers” and “bad things” that the openness allows. But, the overall benefits of the openness are recognized by many, many people — and the great thing about openness is that you really only need a small number of people who recognize its benefits to allow it to flourish.

Closed systems tend to look more elegant at first — and often they are much more elegant at first. But open systems adapt, change and grow at a much faster rate, and almost always overtake closed systems, over time. And, once they overtake the closed systems, almost nothing will allow them to go back. Even if it were possible to turn an open system like the web into a closed system, openness would almost surely sneak out again, via a new method by folks who recognized how dumb it was to close off that open system.

Predictions about the impending demise of open systems have been a staple of tech policy debates for at least a decade. Larry Lessig’s Code and Other Laws of Cyberspace is rightly remembered as a landmark work of tech policy scholarship for its insights about the interplay between “East Coast code” (law) and “West Coast code” (software). But people often forget that it also made some fairly specific predictions. Lessig thought that the needs of e-commerce would push the Internet toward a more centralized architecture: a McInternet that would squeeze out free speech and online anonymity.

So far, at least, Lessig’s predictions have been wide of the mark. The Internet is still an open, decentralized system that allows robust anonymity and free speech. But the pessimistic predictions haven’t stopped. Most recently, Jonathan Zittrain wrote a book predicting the impending demise of the Internet’s “generativity,” this time driven by security concerns rather than commercialization.

It’s possible that these thinkers will be proven right in the coming years. But I think it’s more likely that these brilliant legal thinkers have been misled by a kind of optical illusion created by the dynamics of the marketplace. The long-term trend has been a steady triumph for open standards: relatively open technologies like TCP/IP, HTTP, XML, PDF, Java, MP3, SMTP, BitTorrent, USB, and x86, among many others, have become dominant in their respective domains. But at any given point in time, a disproportionate share of public discussion is focused on those sectors of the technology industry where open and closed platforms are competing head-to-head. After all, nobody wants to read news stories about, say, the fact that TCP/IP’s market share continues to be close to 100 percent and has no serious competition. And at least superficially, the competition between open and closed systems looks really lopsided: the proprietary options tend to be supported by large, deep-pocketed companies with large development teams, multi-million dollar advertising budgets, distribution deals with leading retailers, and so forth. It’s not surprising that people so frequently conclude that open standards are on the verge of getting crushed.

For example, Zittrain makes the iPhone a poster child for the flashy but non-generative devices he fears will come to dominate the market. And it’s easy to see the iPhone’s advantages. Apple’s widely respected industrial design department created a beautiful product. Its software engineers created a truly revolutionary user interface. Apple and AT&T both have networks of retail stores with which to promote the iPhone, and Apple is spending millions of dollars airing television ads. At first glance, it looks like open technologies are on the ropes in the mobile marketplace.

But open technologies have a kind of secret weapon: the flexibility and power that comes from decentralization. The success of the iPhone is entirely dependent on Apple making good technical and business decisions, and building on top of proprietary platforms requires navigating complex licensing issues. In contrast, absolutely anyone can use and build on top of an open platform without asking anyone else for permission, and without worrying about legal problems down the line. That means that at any one time, you have a lot of different people trying a lot of different things on that open platform. In the long run, the creativity of millions of people will usually exceed that of a few hundred engineers at a single firm. As Mike says, open systems adapt, change, and grow at a much faster rate than closed ones.

Yet much of the progress of open systems tends to happen below the radar. The grassroots users of open platforms are far less likely to put out press releases or buy time for television ads. So often it’s only after an open technology has become firmly entrenched in its market—MySQL in the low-end database market, for example—that the mainstream press starts to take notice of it.

As a result, despite the clear trend toward open platforms in the past, it looks to many people like that pattern is going to stop and perhaps even be reversed. I think this illusion is particularly pronounced for folks who are getting their information second- or third-hand. If you’re judging the state of the technology industry from mainstream media stories, television ads, shelf space at Best Buy, etc., you’re likely not getting the whole story. It’s helpful to remember that open platforms have always looked like underdogs. They’re no more likely to be crushed today than they were in 1999, 1989, or 1979.

Satyam and the Inadvertent Web

Satyam is one of the handful of large companies that dominate the IT outsourcing market in India. A week ago today, B. Ramalinga Raju, the company’s chairman, confessed to a years-long accounting fraud. More than a billion dollars of cash the company claimed to have on hand, and the business success that putatively generated those dollars, now appear to have been fictitious.

There are many tech policy issues here. For one, frauds this massive in high tech environments are a challenge and opportunity for computer forensics. For another, though we can hope this situation is unique, it may turn out to be the tip of an iceberg. If Satyam turns out to be part of a pattern of lax oversight and exaggerated profits across India’s high tech sector, it might alter the way we look at high tech globalization, forcing us to revise downward our estimates of high tech’s benefits in India. (I suppose it could be construed as a silver lining that such news might also reveal America, and other western nations, to be more globally competitive in this arena than we had believed them to be.)

But my interest in the story is more personal. I met Mr. Raju in early 2007, when Satyam helped organize and sponsor a delegation of American journalists to India. (I served as Managing Editor of The American at the time.) India’s tech sector wanted good press in America, a desire perhaps increased by the fact that Democrats who were sometimes skeptical of free trade had just assumed control of the House. It was a wonderful trip—we were treated well at others’ expense and got to see, and learn about, the Indian tech sector and the breathtaking city of Hyderabad. I posted pictures of the trip on Flickr, mentioning “Satyam” in the description, showed the pics to a few friends, and moved on with life.

Then came last week’s news. [Graph: traffic to my Flickr account.] The spike represents several thousand people suddenly viewing my pictures of Satyam’s pristine campus.

When I think about the digital “trails” I leave behind—the Flickr, Facebook, and Twitter ephemera that define me by implication—there are some easy presumptions about what the future will hold. Evidence of raw emotions, the unmediated anger, romantic infatuation, depression or exhilaration that life sometimes holds, should generally be kept out of the record, since the social norms that govern public display of such phenomena are still evolving. While others in their twenties may consider such material normal, it reflects a life-in-the-fishbowl style of conduct that older people can find untoward, a style that would years ago have counted as exhibitionistic or otherwise misguided.

I would never, however, have guessed that a business trip to a corporate office park might one day be a prominent part of my online persona. In this case, I happen to be perfectly comfortable with the result—but that feels like luck. A seemingly innocuous trace I leave online that later becomes salient might just as easily prove problematic for me, or for someone else. There seems to be a larger lesson here: anything we leave online could, for reasons we can’t guess at today, turn out to be important later. The inadvertent web—the set of seemingly trivial web content that exists today and will later prove important—may become a powerful force in favor of limiting what we put online.