October 13, 2024

Sunlight on NASED ITA Reports

Short version: we now have gobs of voting system ITA reports, publicly available and hosted by the NSF ACCURATE e-voting center. As I explain below, ITAs were the Independent Testing Authority laboratories that tested voting systems for many years.

Long version: Before the Election Assistance Commission (EAC) took over the testing and certification of voting systems under the Help America Vote Act (HAVA), this critical function was performed by volunteers. The National Association of State Election Directors (NASED) recognized a need for voting system testing and partnered with the Federal Election Commission (FEC) to establish a qualification program that would test systems as having met or exceeded the requirements of the 1990 and 2002 Voting System Standards.*

However, as I’ve lamented many, many times over the years, the input, output and intermediate work product of the NASED testing regime were completely secret, due to proprietary claims by the manufacturers. Once a system completed testing, members of the public could see that an entry had been made in a publicly-available spreadsheet listing the tested components and a NASED qualification number for the system. But the public was permitted no other insight into the NASED qualification regime.

Researchers were convinced, from what evidence was available, that the testing was wholly inadequate: the testing laboratories (the Independent Testing Authorities, or ITAs) lacked the expertise to perform adequate testing, and the NASED technical committee lacked the expertise to competently review the test reports the ITAs submitted. Naturally, when reports of problems started to crop up, like the various Hursti vulnerabilities with Diebold memory cards, the NASED system scrambled to figure out what went wrong.

I now have more moderate views with respect to the NASED regime: sure, it was pretty bad and a lot of serious vulnerabilities slipped through the cracks, but I’m not yet convinced that just having the right people or a different process in place would have resulted in fewer problems in the field. Fixing the NASED system would have required improvements on all fronts: the technology, the testing paradigms, the people involved and the testing and certification process.

The EAC has since taken over testing and certification. Its process is notable for its much higher level of openness and accountability: the test plans are published (previously claimed as proprietary by the testing labs), the test reports are published (previously claimed as proprietary by the vendors) and the process is specified in detail in a program manual, a laboratory manual, notices of clarification, etc.

This is all great and it helps to increase the transparency of the EAC certification program. But, what about the past? What about the testing that NASED did? Well, we don’t know much about it for a number of reasons, chief among them that we never saw any of the materials mentioned above that are now available in the new EAC system.

Thanks to a fortunate FOIA request made of the EAC on behalf of election sleuth Susan Greenhalgh, we now have a slew of ITA reports from one of the ITAs, Ciber.

The reports are available at the following location (hosted by our NSF ACCURATE e-voting center):

http://accurate-voting.org/docs/ita-reports/

These reports cover the Software ITA testing performed by the ITA Ciber for the following voting systems:

  • Automark AIMS 1.0.9
  • Diebold GEMS 1.18.19
  • Diebold GEMS 1.18.22
  • Diebold GEMS 1.18.24
  • Diebold AccuVote-TSx Model D
  • Diebold AccuVote-TSx Model D w/ AccuView Printer
  • Diebold Assure 1.0
  • Diebold Assure 1.1
  • Diebold Election Media Processor 4.6.2
  • Diebold Optical Scan Accumulator Adapter
  • Hart System 4.0
  • Hart System 4.1
  • Hart System 6.0
  • Hart System 6.2
  • Hart System 6.2.1

I’ll be looking at these at my leisure over the coming weeks and pointing out interesting features of these reports and the associated correspondence included in the FOIA production.

*The distinction between certification and qualification, although vague, appears to be that under the NASED system, the states performed the ultimate certification of a voting system as fit for use in future elections.

Usable security irony

I visited Usable Security (the web page for the 2007 Usable Security workshop) today to look up a reference, except the link I followed was actually the SSL version of the page. Guess what?

Secure Connection Failed

usablesecurity.org uses an invalid security certificate.
The certificate expired on 12/29/08 12:21 AM.

(Error code: sec_error_expired_certificate)

  • This could be a problem with the server’s configuration, or it could be someone trying to impersonate the server.
  • If you have connected to this server successfully in the past, the error may be temporary, and you can try again later.

How many other web sites out there have the same problem? Using SSL all the time is clearly a good thing from a security perspective. There’s a performance issue, of course, but then there’s this usability problem from the server admin’s perspective.
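If you run a server and want to avoid this particular embarrassment, certificate expiry is easy to check programmatically. Here’s a minimal sketch in Python; it assumes the third-party cryptography package (recent releases expose not_valid_after_utc), and the second host name is just a placeholder:

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party "cryptography" package

    def days_until_expiry(host, port=443):
        """Fetch a server's TLS certificate (without validating it) and return
        the number of days until it expires; negative means already expired."""
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
        # not_valid_after_utc requires a recent "cryptography" release;
        # older versions expose a naive not_valid_after instead.
        remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
        return remaining.total_seconds() / 86400.0

    if __name__ == "__main__":
        for site in ("usablesecurity.org", "example.com"):
            try:
                print("%s: %.1f days left on its certificate" % (site, days_until_expiry(site)))
            except Exception as exc:
                print("%s: could not check (%s)" % (site, exc))

Run from cron, something like this (or the equivalent openssl s_client one-liner) would have flagged the problem weeks before the certificate lapsed.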

Acceptance rates at security conferences

How competitive are security research conferences? Several people have been tracking this information. Mihai Christodorescu has a nice chart of acceptance and submission rates over time. The most recent data point we have is the 2009 Usenix Security Symposium, which accepted 26 of 176 submissions (a 14.8% acceptance ratio, consistent with recent years). Acceptance rates like that, at top security conferences, are now pretty much the norm.

With its deadline one week ago, ACM CCS 2009 got 317 submissions this year (up from 274 last year, and approx. 300 the year before) and ESORICS 2009, with a submission deadline last Friday night, got 222 submissions (up from about 170 last year).

Think about that: right now there are over 500 research manuscripts in the field of computer security fighting it out, and maybe 15-20% of those will get accepted. (And that’s not counting research in cryptography, or the security-relevant papers that regularly appear in the literature on operating systems, programming languages, networking, and other fields.) Ten years ago, when I first began as an assistant professor, there would be half as many papers submitted. At the time, I grumbled that we had too many security conferences and that the quality of the proceedings suffered. Well, that problem seems mostly resolved, except rather than having half as many conferences, we now have a research community that’s apparently twice as large. I suppose that’s a good thing, although there are several structural problems that we, the academic security community, really need to address.

  • What are we supposed to do with the papers that are rejected, resubmitted, rejected again, and so on? Clearly, some of this work has value and never gets seen. Should we make greater use of the arXiv.org pre-print service? There’s a crypto and computer security section, but it’s not heavily used. Alternatively, we could join in on the IACR Cryptology ePrint Archive or create our own.
  • Should we try to make the conference reviewing systems more integrated across conferences, such that PC comments from one conference show up in a subsequent conference, and the subsequent PC can see both drafts of the paper? This would make conference reviewing somewhat more like journal reviewing, providing a measure of consistency from one conference to the next.
  • Low acceptance ratios don’t necessarily achieve higher quality proceedings. There’s a distinctive problem that occurs when a conference has a huge PC and only three of them review any given paper. Great papers still get in and garbage papers are still rejected, but the outcomes for papers “on the bubble” become more volatile, depending on whether those papers get the right reviewers. (A toy simulation after this list illustrates the effect.) Asking PC members to do more reviews is just going to lower the quality of the reviews or discourage people from accepting positions on PCs. Adding additional PC members could help, but it also can be unwieldy to manage a large PC, and there will be even more volatility.
  • Do we need another major annual computer security conference? Should more workshops be willing to take conference-length submissions? Or should our conferences raise their acceptance rates up to something like 25%, even if that means compressed presentations and the end of printed proceedings? How much “good” work is out there, if only there was a venue in which to print it?
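To make the “on the bubble” point concrete, here is a toy Monte Carlo sketch. It is entirely hypothetical: it assumes each reviewer’s score is the paper’s “true quality” plus independent Gaussian noise and that the PC accepts whenever the average score is positive; none of the numbers come from real PC data.

    import random

    def acceptance_probability(true_quality, n_reviewers, n_trials=100000,
                               noise=1.0, threshold=0.0):
        """Estimate how often a paper is accepted when each reviewer scores it as
        its true quality plus Gaussian noise and the PC accepts if the average
        score clears the threshold."""
        accepted = 0
        for _ in range(n_trials):
            scores = [random.gauss(true_quality, noise) for _ in range(n_reviewers)]
            if sum(scores) / n_reviewers > threshold:
                accepted += 1
        return accepted / n_trials

    if __name__ == "__main__":
        # A clearly strong paper, a clearly weak paper, and one on the bubble.
        for quality in (1.5, -1.5, 0.1):
            for reviewers in (3, 6):
                p = acceptance_probability(quality, reviewers)
                print("quality %+.1f, %d reviewers: accepted %.0f%% of the time"
                      % (quality, reviewers, 100 * p))

With parameters like these, the strong and weak papers land where they should regardless of panel size, while the bubble paper’s fate stays close to a coin flip; adding reviewers shrinks the noise only by a factor of one over the square root of the panel size, which is exactly the volatility described above.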

About the only one of these ideas I don’t like is adding another top-level security conference. Otherwise, we could well do all of the above, and that would be a good thing. I’m particularly curious whether arbitrarily increasing the acceptance rates would resolve some of the volatility issues on the bubble. I think I’d rather that our conferences err on the side of taking the occasional bad/broken/flawed paper rather than rejecting the occasional good-but-misunderstood paper.

Maybe we just need to harness the power of our graduate students. When you give a grad student a paper to review, they treat it like a treasure and write a detailed review, even if they may not be the greatest expert in the field. Conversely, when you give an overworked professor a paper to review, they blast through it, because they don’t have the time to spend a full day on any given paper. Well, it’s not like our grad students have anything better to be doing. But does the additional time they can spend per paper make up for the relative lack of experience and perspective? Can they make good accept-or-reject judgements for papers on the bubble?

For additional thoughts on this topic, check out Matt Welsh’s thoughts on scaling systems conferences. He argues that there’s a real disparity between the top programs / labs and everybody else and that it’s worthwhile to take steps to fix this. (I’ll argue that security conferences don’t seem to have this particular problem.) He also points out what I think is the deeper problem, which is that hotshot grad students must get themselves a long list of publications to have a crack at a decent faculty job. This was emphatically not the case ten years ago.

See also Birman and Schneider’s CACM article (behind a paywall, unless your university has a site license). They argue that the focus on short, incremental results is harming our field’s ability to have impact. They suggest improving the standing of journals in the tenure game, and they suggest disincentivizing people from submitting junk or preliminary papers by creating something of a short-cut reject that gets little or no feedback and, by virtue of the conferences not being blind-review, creates the possibility that a rejected paper could harm the submitter’s reputation.

Chinese Internet Censorship: See It For Yourself

You probably know already that the Chinese government censors Internet traffic. But you might not have known that you can experience this censorship yourself. Here’s how:

(1) Open up another browser window or tab, so you can browse without losing this page.

(2) In the other window, browse to baidu.com. This is a search engine located in China.

(3) Search for an innocuous term such as “freedom to tinker”. You’ll see a list of search results, sent back by Baidu’s servers in China.

(4) Now return to the main page of baidu.com, and search for “Falun Gong”. [Falun Gong is a dissident religious group that is banned in China.]

(5) At this point your browser will report an error — it might say that the connection was interrupted or that the page could not be loaded. What really happened is that the Great Firewall of China saw your Internet packets, containing the forbidden term “Falun Gong”, and responded by disrupting your connection to Baidu.

(6) Now try to go back to the Baidu home page. You’ll find that this connection is disrupted too. Just a minute ago, you could visit the Baidu page with no trouble, but now you’re blocked. The Great Firewall is now cutting you off from Baidu, because you searched for Falun Gong.

(7) After a few minutes, you’ll be allowed to connect to Baidu again, and you can do more experiments.

(Reportedly, users in China see different behavior. When they search for “Falun Gong” on Baidu, the connection isn’t blocked. Instead, they see “sanitized” search results, containing only pages that criticize Falun Gong.)

If you do try more experiments, feel free to report your results in the comments.
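If you’d rather script the experiment than click through it, here is a rough sketch in Python. The requests library, the /s search path, and the wd query parameter are my assumptions about Baidu’s interface, and the firewall’s behavior may differ from what is described above by the time you try it.

    import requests  # third-party HTTP library

    SEARCH_URL = "http://www.baidu.com/s"  # assumed Baidu search endpoint; "wd" carries the query

    def try_search(term):
        """Send one search query to Baidu and report whether the connection survived."""
        try:
            r = requests.get(SEARCH_URL, params={"wd": term}, timeout=10)
            print("%r: HTTP %d, %d bytes of results" % (term, r.status_code, len(r.content)))
        except requests.exceptions.RequestException as exc:
            print("%r: connection disrupted (%s)" % (term, type(exc).__name__))

    if __name__ == "__main__":
        try_search("freedom to tinker")   # step 3: an innocuous query should come back fine
        try_search("falun gong")          # steps 4-5: the forbidden term may get the connection reset
        try_search("freedom to tinker")   # step 6: even innocuous queries may now fail for a few minutes

As with the manual version, waiting a few minutes between runs should let the innocuous query succeed again.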

Stimulus transparency and the states

Yesterday, I testified at a field hearing of the U.S. House Committee on Oversight and Government Reform. The hearing title was The American Recovery and Reinvestment Act of 2009: The Role of State and Local Governments.

My written testimony addressed plans to put stimulus data on the Internet, primarily at Recovery.gov. There have been promising signs, but important questions remain open, particularly about stimulus funds that are set to flow through the states. I was reacting primarily to the most recent round of stimulus-related guidance from the Office of Management and Budget (dated April 3).

Based on the probing questions about Recovery.gov that were asked by members from both parties, I’m optimistic that Congressional oversight will be a powerful force to encourage progress toward greater online transparency.