November 24, 2024

NJ court permits release of post-trial briefs in voting case

In 2009 the Superior Court of New Jersey, Law Division, held a trial on the legality of using paperless direct-recording electronic (DRE) voting machines. Plaintiffs in the suit argued that because it’s so easy to replace the software in a DRE with fraudulent software that cheats in elections, DRE voting systems do not guarantee the substantive right to vote (and to have one’s vote counted) required by the New Jersey constitution and New Jersey statutory law.

I described this trial in three articles last year: trial update, summary of plaintiffs’ witnesses’ testimony, and summary of defense witnesses’ testimony.

Normally in a lawsuit, the courtroom is open. The public can attend all legal proceedings. Additionally, plaintiffs are permitted to explain their case to the public by releasing their post-trial briefs (“proposed findings of fact” and “proposed conclusions of law”). But in this suit the Attorney General of New Jersey, representing the defendants, asked that the courtroom be closed for parts of the proceedings, and that the Court withhold all post-trial documents from the public, indefinitely.

More than a year after the trial ended, the Court finally held a hearing to determine whether post-trial documents should be kept from the public. The Attorney General’s office failed to even articulate a legal argument for keeping the briefs secret.

So, according to a Court Order of October 15, 2010, counsel for the plaintiffs (Professor Penny Venetis of Rutgers Law School aided by litigators from Patton Boggs LLP) are now free to show you the details of their legal argument.

The briefs are available here:
Plaintiffs’ Proposed Findings of Fact
Plaintiffs’ Proposed Conclusions of Law

I am now free to tell you all sorts of interesting things about my hands-on experiences with (supposedly) tamper-evident security seals. I published some preliminary findings in 2008. Over the next few weeks I’ll post a series of articles about the limitations of tamper-evident seals in securing elections.

Court permits release of unredacted report on AVC Advantage

In the summer of 2008 I led a team of computer scientists in examining the hardware and software of the Sequoia AVC Advantage voting machine. I did this as a pro-bono expert witness for the Plaintiffs in the New Jersey voting-machine lawsuit. We were subject to a Protective Order that, in essence, permitted publication of our findings but prohibited us from revealing any of Sequoia’s trade secrets.

At the end of August 2008, I delivered my expert report to the court, and prepared it for public release as a technical report with the rest of my team as coauthors. Before we could release that report, Sequoia intervened with the Court, claiming that we were revealing trade secrets. We had been very careful not to reveal trade secrets, so we disputed Sequoia’s claim. In October 2008 the Court ruled mostly in our favor on this issue, permitting us to release the report with some redactions, and reserving a decision on those redacted sections until later.

The Court has now ruled on those remaining sections, completely vindicating our position that the original report was within the parameters of the Protective Order. On October 5, 2010 Judge Linda Feinberg signed an order permitting me to release the original, unredacted expert report, which is now available here.

If you’re curious, you can look at paragraphs 19.8, 19.9, 21.3, and 21.5, as well as Appendices B through G, all of which were blacked out in our previously released report.

HTC Willfully Violates the GPL in T-Mobile's New G2 Android Phone

[UPDATE (Oct 14, 2010): HTC has released the source code. Evidently 90-120 days was not in fact necessary, given that they managed to do it 7 days after the phone’s official release. It is possible that the considerable pressure from the media, modders, kernel copyright holders, and other kernel hackers contributed to the apparently accelerated release.]

[UPDATE (Nov 10, 2010): The phone has been permanently rooted.]

Last week, the hottest new Android-based phone arrived on the doorstep of thousands of expectant T-Mobile customers. What didn’t arrive with the G2 was the source code that runs the heart of the device — a customized Linux kernel. Android has been hailed as an open platform in the midst of other highly locked-down systems, but as it makes its way out of the Google source repository and into devices, this vision has repeatedly hit speed bumps. Last year, I blogged about one such issue, and to their credit Google sorted out a solution. This has ultimately been to everyone’s benefit, because the modified versions of the OS have routinely enabled software applications that the stock versions haven’t supported (not to mention improved reliability and speed).

When the G2 arrived, modders were eager to get to work. First, they had to overcome one of the common hurdles to getting anything installed — the “jailbreak”. Although the core operating system is open source, phone manufacturers and carriers have placed artificial restrictions on the ability to modify the basic system files. The motivations for doing so are mixed, but the effect is that hackers have to “jailbreak” or “root” the phone — essentially obtain super-user permissions. In 2009, the Copyright Office explicitly permitted such efforts when they are done for the purpose of enabling third-party programs to run on a phone.

G2 owners were excited when it appeared that an existing rooting technique worked on the G2, but were dismayed when their efforts were reversed every time the phone rebooted. T-Mobile passed the buck to HTC, the phone manufacturer:

The HTC software implementation on the G2 stores some components in read-only memory as a security measure to prevent key operating system software from becoming corrupted and rendering the device inoperable. There is a small subset of highly technical users who may want to modify and re-engineer their devices at the code level, known as “rooting,” but a side effect of HTC’s security measure is that these modifications are temporary and cannot be saved to permanent memory. As a result the original code is restored.

As it turned out, the internal memory chip included an option to make certain portions of memory read-only, which had the effect of silently discarding all changes upon reboot. However, it appears that this can be changed by sending the right series of commands to the chip. This effectively moved the rooting efforts into the complex domain of hardware hacking, with modders trying to figure out how to send these commands. Doing so involves writing some very challenging code that interacts with the open-source Linux kernel. The hackers haven’t yet succeeded (although they still could), largely because they are working in the dark. The relevant details about how the Linux kernel has been modified by HTC have not been disclosed. Reportedly, the company is replying to email queries with the following:

Thank you for contacting HTC Technical Assistance Center. HTC will typically publish on developer.htc.com the Kernel open source code for recently released devices as soon as possible. HTC will normally publish this within 90 to 120 days. This time frame is within the requirements of the open source community.

Perhaps HTC (and T-Mobile, distributor of the phone) should review the actual contents of the GNU General Public License (v2), which stipulates the legal requirements for modifying and redistributing Linux. It states that you may only distribute derivative code if you “[a]ccompany it with the complete corresponding machine-readable source code.” Notably, there is no mention of a “grace period” or the like.

The importance of redistributing source code in a timely fashion goes beyond enabling phone rooting. It is the foundation of the “copyleft” regime of software licensing that has led to the flourishing of the open source software ecosystem. If every useful modification required waiting 90 to 120 days to be built upon, it would have taken eons to get to where we are today. It’s one thing for a company to choose to pursue the closed-source model and to start from scratch, but it’s another thing for it to profit from the goodwill of the open source community while imposing arbitrary and illegal restrictions on the code.

NPR Gets it Wrong on the Rutgers Tragedy: Cyberbullying is Unique

On Saturday, NPR’s Weekend All Things Considered ran a story by Elizabeth Blair called “Public Humiliation: It’s Not The Web, It’s Us” [transcript]. The story purported to examine the phenomenon of internet-mediated public humiliation in the context of last week’s tragic suicide of Tyler Clementi, a Rutgers student who was secretly filmed having a sexual encounter in his dorm room. The video was redistributed online by the classmates who created it. The story is heartbreaking to many locals who have friends or family at Rutgers, especially to those of us in the technology policy community who are again reminded that so-called “cyberbullying” can be a life-or-death policy issue.

Thus, I was disappointed that the All Things Considered piece decided to view the issue through the lens of “public humiliation,” opening with a sampling of reality TV clips and the claim that they are significantly parallel to this past week’s tragedy. This is just not the case, for reasons that are widely known to people who study online bullying. Reality TV is about participants voluntarily choosing to expose themselves in an artificial environment, and cyberbullying is about victims being attacked against their will in the real world and in ways that reverberate even longer and more deeply than traditional bullying. If Elizabeth Blair or her editors had done the most basic survey of the literature or experts, this would have been clear.

The oddest choice of interviewees was Tavia Nyong’o, a professor of performance studies at New York University. I disagree with his claim that the TV show Glee has something significant to say about the topic, but more disturbing is his statement about what we should conclude from the event:

“[My students and I] were talking about the misleading perception, because there’s been so much advances in visibility, there’s no cost to coming out anymore. There’s a kind of equal opportunity for giving offense and for public hazing and for humiliating. We should all be able to deal with this now because we’re all equally comfortable in our own skins. Tragically, what Rutgers reveals is that we’re not all equally comfortable in our own skins.”

I’m not sure if it’s as obvious to everyone else why this is absolutely backward, but I was shocked. What Rutgers reveals is, yet again, that new technologies can facilitate new and more creative ways of being cruel to each other. What Rutgers reveals is that although television may give us ways to examine the dynamics of privacy and humiliation, we have a zone of personal privacy that still matters deeply. What Rutgers tells us is that cyberbullying has introduced new dynamics into the way that young people develop their identities and deal with hateful antagonism. Nothing about Glee or reality TV tells us that we shouldn’t be horrified when someone secretly records and distributes video of our sexual encounters. I’m “comfortable in my own skin” but I would be mortified if my sexual exploits were broadcast online. Giving Nyong’o the benefit of the doubt, perhaps his quote was taken out of context, or perhaps he’s just coming from a culture at NYU that differs radically from the experience of somewhere like middle America, but I don’t see how Blair or her editors thought that this way of constructing the piece was justifiable.

The name of the All Things Considered piece was, “It’s Not The Web, It’s Us.” The reality is that it’s both. Humiliation and bullying would of course exist regardless of the technology, but new communications technologies change the balance. For instance, the Pew Internet & American Life Project has observed how digital technologies are uniquely invasive, persistent, and distributable. Pew has also pointed out (as have many other experts) that computer-mediated communications can often have the effect of disinhibition — making attackers comfortable with doing what they would otherwise never do in direct person-to-person contact. The solution may have more to do with us than the technology, but our solutions need to be informed by an understanding of how new technologies alter the dynamic.

General Counsel's Role in Shoring Up Authentication Practices Used in Secure Communications

Business conducted over the Internet has benefited hugely from web-based encryption. Retail sales, banking transactions, and secure enterprise applications have all flourished because of the end-to-end protection offered by encrypted Internet communications. An encrypted communication, however, is only as secure as the process used to authenticate the parties doing the communicating. The major Internet browsers all currently use the Certificate Authority Trust Model to verify the identity of websites on behalf of end-users. (In the Model, third parties known as certificate authorities or “CAs” issue digital certificates to website operators; browsers ship with the CAs’ root certificates, enabling the end-user’s computer to cryptographically verify that a trusted CA has vouched for the website’s identity.) The CA Trust Model has recently come under fire by the information security community because of technical and institutional defects. Steve Schultze and Ed Felten, in previous posts here, have outlined the Model’s shortcomings and examined potential fixes. The vulnerabilities are a big deal because of the potential for man-in-the-middle wiretap exploits as well as imposter website scams.
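The trust chain described above can be sketched in miniature. This is a toy illustration only, using textbook RSA with tiny primes and unpadded hashes — nothing like a real CA’s 2048-bit keys, X.509 certificates, or padding schemes — and the site name `bank.example` is hypothetical. It shows only the core idea: the browser trusts a site’s key because a CA it already trusts signed that key, and a man-in-the-middle’s substitute key carries no such signature.

```python
import hashlib
from math import gcd

# --- Toy RSA (illustration only; real CAs use large padded keys) ---

def _next_prime(n: int) -> int:
    """Smallest prime >= n, by trial division (fine for tiny toy primes)."""
    while True:
        if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1)):
            return n
        n += 1

def make_keypair(seed: int):
    """Return (public, private) toy RSA keys built from small primes near seed."""
    p, q = _next_prime(seed), _next_prime(seed * 2)
    n, phi = p * q, (p - 1) * (q - 1)
    e = 3
    while gcd(e, phi) != 1:   # find a usable public exponent
        e += 2
    d = pow(e, -1, phi)       # modular inverse (Python 3.8+)
    return (n, e), (n, d)

def sign(priv, message: bytes) -> int:
    n, d = priv
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, message: bytes, sig: int) -> bool:
    n, e = pub
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

# --- The CA trust model in miniature ---

ca_pub, ca_priv = make_keypair(10_000_019)     # browser ships with ca_pub built in
site_pub, site_priv = make_keypair(20_000_003)

# A "certificate": the site's identity and public key, signed by the CA.
cert_body = f"bank.example|{site_pub}".encode()
cert_sig = sign(ca_priv, cert_body)

# Browser side: accept the site only if a trusted CA vouched for its key.
assert verify(ca_pub, cert_body, cert_sig)

# A man-in-the-middle presenting his own key has no CA signature for it.
mitm_pub, _mitm_priv = make_keypair(30_000_001)
forged_body = f"bank.example|{mitm_pub}".encode()
assert not verify(ca_pub, forged_body, cert_sig)
```

The sketch also makes the “too many CAs” problem concrete: the browser accepts *any* certificate signed by *any* key on its built-in trust list, so a single coerced or compromised CA can vouch for an imposter and the verification above will still pass.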

One of the core problems with the CA Trust Model is that there are just too many CAs. Although organizations can configure their browser platforms to trust fewer CAs, the problem of how to isolate trustworthy (and untrustworthy) CAs remains. A good review of trustworthiness would start with examining the civil and criminal track record of CAs and their principals; identifying the geographic locations where CAs are resident; determining in which legal jurisdictions the CAs operate; determining which governmental actors may be able to coerce the CA to issue bogus certificates, behind-the-scenes, for the purpose of carrying out surveillance; analyzing the loss limitation and indemnity provisions found in each CA’s Certification Practice Statement or CPS; and nailing down which CAs engage in cross-certification. These are just a few of the factors an organization must weigh as an end-user. There is an entirely separate legal analysis that must be done from the standpoint of an organization as a website operator and purchaser of SSL certificates (which will be the subject of a future post).

The bottom line is that the tasks involved with evaluating CAs are not ones that IT departments, acting alone, have sufficient resources to perform. I recently posted on my law firm’s blog a short analysis regarding why it’s time for General Counsel to weigh in on the authentication practices associated with secure communications. The post resonated in the legal blogosphere and was featured in write-ups on Law.Com’s web-magazine “Corporate Counsel” and 3 Geeks and a Law Blog. The sentiment seems to be that this is an area ripe for remedial measures and that a collaborative approach is in order which leverages the resources and expertise of General Counsel. Could it be that the deployment of the CA Trust Model is about to get a long overdue shakeup?