April 21, 2014

Building a better CA infrastructure

As several Tor Project authors, Ben Adida, and many others have written, our certificate authority infrastructure has the flaw that any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now it’s untenable. So what solutions are available?

First, some non-solutions: extended validation certs do nothing useful. Will users ever be trained well enough to look for the extra cues in browser behavior and scream when those cues are absent because a site presents only a normal cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
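The core of this idea fits in a few lines. Here is a minimal sketch (the file name and return labels are my own, not Certificate Patrol’s): record a fingerprint of each certificate the first time a host is seen, and flag any later change.

```python
import hashlib
import json
from pathlib import Path

STORE = Path("seen_certs.json")  # local history of certificate fingerprints

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def check_cert(host: str, der_bytes: bytes) -> str:
    """Compare a freshly observed certificate against what we saw before."""
    history = json.loads(STORE.read_text()) if STORE.exists() else {}
    fp = fingerprint(der_bytes)
    previous = history.get(host)
    history[host] = fp
    STORE.write_text(json.dumps(history))
    if previous is None:
        return "first-seen"   # nothing to compare against yet
    if previous == fp:
        return "match"        # same certificate as last time
    return "ANOMALY"          # certificate changed: warn prominently
```

A real implementation would also track the issuer and expiration date so that a routine renewal from the same CA doesn’t trigger the same scary warning as a genuinely anomalous swap.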

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.
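The notary comparison amounts to a quorum check across vantage points. A sketch (the verdict labels and quorum threshold are illustrative, not taken from the CMU system):

```python
from collections import Counter

def notary_verdict(my_fingerprint, notary_fingerprints, quorum=0.5):
    """Compare the certificate we received against independent observations.

    notary_fingerprints: fingerprints reported by notaries at other network
    vantage points (notaries we couldn't reach are simply excluded).
    """
    if not notary_fingerprints:
        return "no-data"  # e.g., every notary filtered by an attacker's firewall
    counts = Counter(notary_fingerprints)
    agreeing = counts[my_fingerprint]
    if agreeing / len(notary_fingerprints) > quorum:
        return "consistent"
    return "suspicious"   # most vantage points see a different certificate
```

Note the "no-data" case: it is exactly the situation described above, where the attacker’s firewall blocks the notaries themselves, so a browser must decide how to behave when it cannot get a verdict at all.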

UPDATE: Google is now doing almost exactly what I suggested.

There are a variety of other proposals out there, notably trying to leverage DNSSEC to enhance or supplant the need for SSL/TLS certificates. Since DNSSEC provides more control over your DNS records, it also provides more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attacker firewalls to filter without breaking everything, so I certainly hope this takes off.
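The DNSSEC-based proposals boil down to publishing a digest of the legitimate certificate in a signed DNS record and having the browser compare. A minimal sketch of just the digest comparison (real DANE TLSA records also carry usage, selector, and matching-type fields that this omits):

```python
import hashlib

def matches_dns_pin(cert_der: bytes, published_sha256_hex: str) -> bool:
    """DANE-style check: does the certificate we actually received hash to
    the value the site published in its DNSSEC-signed DNS records?"""
    return hashlib.sha256(cert_der).hexdigest() == published_sha256_hex.lower()
```

The security of the comparison rests entirely on DNSSEC: without a signed chain from the root, an attacker who can forge DNS answers can forge the pin too.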

Let’s say that future browsers properly use all of these tricks and can unquestionably determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists on proxying all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor's note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 -- as well as the "Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates" event that CITP hosted in DC last year with policymakers, industry, and activists.]

The case of Prof. Cronon and the FOIA requests for his private emails

Prof. William Cronon, from the University of Wisconsin, started a blog, Scholar as Citizen, wherein he critiqued Republican policies in the State of Wisconsin and elsewhere. I’m going to skip the politics and focus on the fact that the Republicans used Wisconsin’s FOIA mechanism to ask for a wide variety of his emails and they’re likely to get them.

Cronon believes this is a fishing expedition to find material to discredit him and he’s probably correct. He also notes that he scrupulously segregates his non-work-related emails into a private account (perhaps Gmail) while doing his work-related email using his wisc.edu address, as well he should.

What I find fascinating about the Cronon case is that it highlights a threat model for email privacy that doesn’t get much discussion among my professional peers. Sophisticated cryptographic mechanisms don’t protect emails against a FOIA request (or, for that matter, a sufficiently motivated systems administrator).

When I’ve worked in the past with lawyers when our communications weren’t privileged (i.e., opposing counsel would eventually receive every email we ever exchanged), we instead exchanged emails of the form “are you available for a phone call at 2pm?” and not much else. This is annoying when working on a lawsuit, and it would completely grind to a halt the regular business of a modern academic.

While Cronon doesn’t want to abandon his wisc.edu address, consider the case that he could just forward his email to Gmail and have the university system delete its local copy (which is certainly an option for me with my rice.edu email). At that point, it becomes an interesting legal question of whether a FOIA request can compel production of content from his “private” email service. (And, future lawmaking could well explicitly extend the reach of FOIA to private accounts, particularly when many well-known politicians and others subject to FOIA deliberately conduct their professional business on private servers.)

Here’s another thing to ponder: When I send email from Gmail, it happily forges my rice.edu address in the from line. This allows me to use Gmail without most of the people who correspond with me ever knowing or caring that I’m using Gmail. By blurring the lines between my rice.edu and gmail.com email, am I also blurring the boundary of legal requests to discover my email? Since Rice is a private university, there are presumably no FOIA issues for me, but would it be any different for Prof. Cronon? Could or should present or future FOIA laws compel you to produce content from your “private” email service when you conflate it with your “professional” email address?
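There’s no deep trickery behind this “forging”: the From line is just an ordinary message header, unrelated to whatever account authenticates to the mail server. A sketch (the addresses here are illustrative):

```python
from email.message import EmailMessage

# The From header is plain message text; it need not match the account
# that logs in to the SMTP server to do the actual sending.
msg = EmailMessage()
msg["From"] = "dwallach@rice.edu"      # professional address in the header...
msg["To"] = "colleague@example.edu"
msg["Subject"] = "Meeting"
msg.set_content("Are you available for a phone call at 2pm?")
# ...even though the sending step would authenticate as a gmail.com account:
# smtplib.SMTP_SSL("smtp.gmail.com").login("dwallach@gmail.com", password)
```

This is exactly why the boundary blurs: the message my correspondents see carries one identity, while the service that stores and transmits it belongs to another.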

Or, leaving FOIA behind for the minute, could or should my employer have any additional privilege to look into my Gmail account when I’m using it for all of my professional emails and forging a rice.edu mail header?

One last alternative: Let’s say I appended some text like this at the bottom of my email:

My personal email is dwallach at gmail.com and my professional email is dwallach at rice.edu. Please use the former for personal matters and the latter for professional matters.

If I go to explicit lengths to separate the two email addresses, using separate services, and making it abundantly clear to all my correspondents which address serves which purpose, could or should that make for a legally significant difference in how FOIA treats my emails?

Do photo IDs help prevent vote fraud?

In many states, an ID is required to vote. The ostensible purpose is to prevent people from casting a ballot for someone else – dead or alive. Historically, it was also used to prevent poor and minority voters, who are less likely to have government IDs, from voting.

No one would (publicly) admit to the second goal today, so the first is always the declared purpose. But does it work?

In my experience as a pollworker in Virginia, the answer is clearly “no”. There are two basic problems – the rules for acceptable IDs are so broad (so as to avoid disenfranchisement) as to be useless, and pollworkers are given no training as to how to verify an ID.

Let’s start with what Virginia law says. The Code of Virginia 24.2-643 reads in part:

An officer of election shall ask the voter for his full name and current residence address and repeat, in a voice audible to party and candidate representatives present, the full name and address stated by the voter. The officer shall ask the voter to present any one of the following forms of identification: his Commonwealth of Virginia voter registration card, his social security card, his valid Virginia driver’s license, or any other identification card issued by a government agency of the Commonwealth, one of its political subdivisions, or the United States; or any valid employee identification card containing a photograph of the voter and issued by an employer of the voter in the ordinary course of the employer’s business. If the voter’s name is found on the pollbook, if he presents one of the forms of identification listed above, if he is qualified to vote in the election, and if no objection is made, […]

Let’s go through these one at a time.

  • A voter registration card has no photo or signature, and so little other identifying information that there’s no way to validate it. Since voters don’t sign the pollbook in Virginia (as they do in some other states), there would be no signature to compare against even if the card had one. And since the voter card is just a piece of paper with no watermark, it’s easy to fabricate on a laser printer.
  • A Social Security Card (aside from the privacy issues of sharing the voter’s SSN with the pollworker) is usually marked “not for identification”. And it has no photo or address.
  • A Virginia driver’s license has enough information for identification (i.e., a photo and signature, as well as the voter’s address).
  • Other Virginia, locality, or Federal ID. Sounds good, but I have no clue what all the different possible IDs that fall into this category look like, so I have no idea as a pollworker how to tell whether they’re legitimate or not. (On the positive side, a passport is allowed by this clause – but it doesn’t have an address.)
  • Employee ID card. This is the real kicker. There are probably ten thousand employers in my county. Many of them don’t even follow a single standard for employee IDs (my own employer had several versions until earlier this year, when anyone with an old ID was “upgraded”). I don’t know the name of every employer, much less how to distinguish a valid ID from an invalid one. If the voter’s name and photo are on the card, along with some company name or logo, that’s probably good enough. Any address on the card is going to be of the employer, not the voter.

So if I want to commit fraud (a felony) and vote for someone else (living or dead), how hard is it? Simple: create a laminated ID with company name “Bob’s Plumbing Supply” and the name of the voter to be impersonated, memorize the victim’s address, and that’s all it takes.

Virginia law also allows the voter who doesn’t have an ID with him/her to sign an affidavit that they are who they say they are. Falsifying the affidavit is a felony, but it really doesn’t matter if you’re already committing a felony by voting for someone else.

Now let’s say the laws were tightened to require a driver’s license, military ID, or a passport, and no others (and eliminate the affidavit option). Then at least it would be possible to train pollworkers what an ID looks like. But there are still two problems. First, the law says the voter must present the ID, but it never says what the pollworker must do with it. And second, the pollworkers never receive any training in how to verify an ID – a bouncer at a bar gets more training in IDs than a pollworker safeguarding democracy. In Virginia, when renewing a driver’s license the person has the choice to continue to use the previous picture, or to wait in line a couple hours at a DMV site to get a new picture. Not surprisingly, most voters have old pictures. Mine is ten years old, and dates from when I had a full head of hair and a beard, both of which have long since disappeared. Will a pollworker be able to match the IDs? Probably not – but since no one ever tries, that doesn’t matter. And passports are good for 10 years, so the odds are that picture will be quite old too. I’m really bad at matching faces, so when I’m working as a pollworker I don’t even try.

There are some positive things about requiring an ID. Most voters present their driver’s license, frequently without even being asked. If the name is complex, or the voter has a heavy accent, or the room is particularly noisy, or the pollworker is hard of hearing (or not paying close attention), having the written name is a help. But that’s about it.

So what can we learn from this? Photo ID laws for voting, especially those that allow for company ID cards, are almost useless for preventing voting fraud. It’s the threat of felony prosecution, combined with the fact that the vast majority of voters are honest, that prevents vote fraud… not the requirement for a photo ID.

Web Browsers and Comodo Disclose A Successful Certificate Authority Attack, Perhaps From Iran

Today, the public learned of a previously undisclosed compromise of a trusted Certificate Authority — one of the entities that issues certificates attesting to the identity of “secure” web sites. Last week, Comodo quietly issued a command via its certificate revocation servers designed to tell browsers to no longer accept 9 certificates. This is fairly standard practice, and certificates are occasionally revoked for a variety of reasons. What was unique about this case is that it was followed by very rapid updates by several browser manufacturers (Chrome, Firefox, and Internet Explorer) that go above and beyond the normal revocation process and hard-code the certificates into a “do not trust” list. Mozilla went so far as to force this update in as the final change to their source code before shipping their major new release, Firefox 4, yesterday.
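A hard-coded “do not trust” list is conceptually simple: the browser rejects the offending certificates outright, before any revocation lookup, so the protection holds even when CRL/OCSP servers are unreachable. A sketch (the issuer names and serial numbers below are made up for illustration; real browsers keyed the entries on issuer plus serial number):

```python
# Hypothetical entries; the real Comodo blacklist shipped inside the
# browser binary itself rather than in a data file.
BLACKLISTED = {
    ("Example CA", "0x047ecbe9"),
    ("Example CA", "0x00f5c86a"),
}

def hard_fail_check(issuer: str, serial: str) -> None:
    """Reject a blacklisted certificate before any network revocation check,
    so the block works even if an attacker filters CRL/OCSP traffic."""
    if (issuer, serial) in BLACKLISTED:
        raise ConnectionError("certificate is on the hard-coded blacklist")
```

The contrast with ordinary revocation is the point: a revocation list the attacker can block is no protection at all, while a list baked into the browser cannot be filtered away.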

This implied that the certificates were likely malicious, and may even have been used by a third party to impersonate secure sites. Here at Freedom to Tinker we have explored several ways in which the current browser security system is prone to vulnerabilities [1, 2, 3, 4, 5, 6].

Clearly, something exceptional happened behind the scenes. Security hacker Jacob Appelbaum did some fantastic detective work using the EFF’s SSL Observatory data and discovered that all of the certificates in question originated from Comodo — perhaps from one of the many affiliated companies that issues certificates under Comodo’s authority via their “Registration Authority” (RA) program. Evidently, someone had figured out how to successfully attack Comodo or one of their RAs, or had colluded with them in getting some invalid certs.

Appelbaum’s pressure helped motivate a statement from Mozilla [and a follow-up] and a statement from Microsoft that gave a little bit more detail. This afternoon, Comodo released more details about the incident, including the domains for which rogue certificates were issued: mail.google.com, www.google.com, login.yahoo.com (3 certs), login.skype.com, addons.mozilla.org, login.live.com, and “global trustee”. Comodo noted:

“The attack came from several IP addresses, but mainly from Iran.”

and

“this was likely to be a state-driven attack”

[Update: Someone claiming to be the hacker has posted a manifesto. At least one security researcher finds the claim to be credible.]

It is clear that the domains in question are among the most attractive targets for someone who wants to surveil the personal communications of many people online by inserting themselves as a “man in the middle.” I don’t have any deep insights on Comodo’s analysis of the attack’s origins, but it seems plausible. (I should note that although Comodo claims that only one of the certificates was “seen live on the Internet”, their mechanism for detecting this relies on the attacker not taking some basic precautions that would be well within the means and expertise of someone executing this attack.) [update: Jacob Appelbaum also noted this, and has explained the technical details]

What does this tell us about the current security model for web browsing? This instance highlights a few issues:

  • Too many entities have CA powers: As the SSL Observatory project helped demonstrate, there are thousands of entities in the world that have the ability to issue certificates. Some of these are trusted directly by browsers, and others inherit their authority. We don’t even know who many of them are, because such delegation of authority — either via “subordinate certificates” or via “registration authorities” — is not publicly disclosed. The more of these entities exist, the more vulnerabilities exist.
  • The current system does not limit damage: Any entity that can issue a certificate can issue a certificate for any domain in the world. That means that a vulnerability at one point is a vulnerability for all.
  • Governments are a threat: All the major web browsers currently trust many government agencies as Certificate Authorities. This often includes places like Tunisia, Turkey, UAE, and China, which some argue are jurisdictions hostile to free speech. Hardware products exist and are marketed explicitly for government surveillance via a “man in the middle” attack.
  • Comodo in particular has a bad track record with their RA program: The structure of “Registration Authorities” has led to poor or nonexistent validation in the past, but Mozilla and the other browsers have so far refused to take any action to remove Comodo or put them on probation.
  • We need to step up efforts on a fix: Obviously the current state of affairs is not ideal. As Appelbaum notes, efforts like DANE, CAA, HASTLS, and Monkeysphere deserve our attention.

[Update: Jacob Appelbaum has posted his response to the Comodo announcement, criticizing some aspects of their response and the browsers.]

[Update: A few more details are revealed in this Comodo blog post, including the fact that "an attacker obtained the username and password of a Comodo Trusted Partner in Southern Europe."]

[Update: Mozilla has made Appelbaum's bug report publicly visible, along with the back-and-forth between him and Mozilla before the situation was made public. There are also some interesting details in the Mozilla bug report that tracked the patch for the certificate blacklist. There is yet another bug that contains the actual certificates that were issued. Discussion about what Mozilla should do in further response to this incident is proceeding in the newsgroup dev.mozilla.security.policy.]

[Update: I talked about this issue on Marketplace Tech Report.]

You may also be interested in an October 22, 2010 event that we hosted on the policy and technology issues related to online trust (streaming video available):


Google Should Stand up for Fair Use in Books Fight

On Tuesday Judge Denny Chin rejected a proposed settlement in the Google Book Search case. My write-up for Ars Technica is here.

The question everyone is asking is what comes next. The conventional wisdom seems to be that the parties will go back to the bargaining table and hammer out a third iteration of the settlement. It’s also possible that the parties will try to appeal the rejection of the current settlement. Still, in case anyone at Google is reading this, I’d like to make a pitch for the third option: litigate!

Google has long been at the forefront of efforts to shape copyright law in ways that encourage innovation. When the authors and publishers first sued Google back in 2005, I was quick to defend the scanning of books under copyright’s fair use doctrine. And I still think that position is correct.

Unfortunately, in 2008 Google saw an opportunity to make a separate truce with the publishing industry that placed Google at the center of the book business and left everyone else out in the cold. Because of the peculiarities of class action law, the settlement would have given Google the legal right to use hundreds of thousands of “orphan” works without actually getting permission from their copyright holders. Competitors who wanted the same deal would have had no realistic way of doing so. Googlers are a smart bunch, and so they took what was obviously a good deal for them even though it was bad for fair use and online innovation.

Now the deal is no longer on the table, and it’s not clear if it can be salvaged. Judge Chin suggested that he might approve a new, “opt-in” settlement. But switching to an opt-in rule would undermine the very thing that made the deal so appealing to Google in the first place: the freedom to incorporate works whose copyright status was unclear. Take that away, and it’s not clear that Google Book Search can exist at all.

Moreover, I think the failure of the settlement may strengthen Google’s fair use argument. Fair use exists as a kind of safety valve for the copyright system, to ensure that it does not damage free speech, innovation, and other values. Although formally speaking judges are supposed to run through the famous four-factor test to determine what counts as a fair use, in practice an important factor is whether the judge perceives the defendant as having acted in good faith. Google has now spent three years looking for a way to build its Book Search project using something other than fair use, and come up empty. This underscores the stakes of the fair use fight: if Judge Chin ruled against Google’s fair use argument, it would mean that it was effectively impossible to build a book search engine as comprehensive as the one Google has built. That outcome doesn’t seem consistent with the constitution’s command that copyright promote the progress of science and the useful arts.

In any event, Google may not have much choice. If it signs an “opt-in” settlement with the Author’s Guild and the Association of American Publishers, it’s likely to face a fresh round of lawsuits from other copyright holders who aren’t members of those organizations — and they might not be as willing to settle for a token sum. So if Google thinks its fair use argument is a winner, it might as well test it now before it’s paid out any settlement money. And if it’s not, then this business might be too expensive for Google to be in at all.

Seals on NJ voting machines, as of 2011

Part of a multipart series starting here.

During the NJ voting-machines trial, plaintiffs’ expert witness Roger Johnston testified that the State’s attempt to secure its AVC Advantage voting machines was completely ineffective: the seals were ill-chosen, the all-important seal use protocol was entirely missing, and anyway the physical design of this voting machine makes it practically impossible to secure using seals.

Of course, the plaintiffs’ case covered many things other than security seals. And even if the seals could work perfectly, how could citizens know that fraudulent vote-miscounting software hadn’t been perfectly sealed into the voting machine?

Still, it was evident from Judge Linda Feinberg’s ruling, in her Opinion of February 2010, that she took very seriously Dr. Johnston’s testimony about the importance of a seal use protocol. She ordered,


4. SEALS AND SEAL-USE PROTOCOLS (REQUIRED)

For a system of tamper-evident seals to provide effective protection seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. While the new seals proposed by the State will provide enhanced security and protection against intruders, it is critical for the State to develop a seal protocol, in writing, and to provide appropriate training for individuals charged with seal inspection. Without a seal-use protocol, use of tamper-evident seals significantly reduces their effectiveness.

The court directs the State to develop a seal-use protocol. This shall include a training curriculum and standardized procedures for the recording of serial numbers and maintenance of appropriate serial number records.

(With regard to other issues, she ordered improvements to the security of computers used to prepare ballot definitions and aggregate vote totals; criminal background checks for workers who maintain and transport voting machines; better security for voting machines when they are stored at polling places before elections; that election computers not be connected to the Internet; and better training for election workers in “protocols for the chain of custody and maintenance of election records.”)
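The record-keeping core of what the court ordered is straightforward to state precisely. A sketch (the machine IDs, serial formats, and verdict labels are my own, not the State’s): record each seal’s serial number at installation, and treat an inspection as passing only when the seal is both physically intact and bears the recorded serial.

```python
def install_seal(log: dict, machine_id: str, serial: str) -> None:
    """Record which seal serial number was installed on which machine."""
    log[machine_id] = serial

def inspect_seal(log: dict, machine_id: str, observed_serial: str,
                 intact: bool) -> str:
    """An inspection passes only if the seal is physically intact AND its
    serial number matches the one recorded at installation."""
    expected = log.get(machine_id)
    if expected is None:
        return "no-record"   # protocol failure: nothing to compare against
    if not intact:
        return "tampered"
    if observed_serial != expected:
        return "swapped"     # intact-looking seal, wrong serial: a counterfeit
    return "ok"
```

Without the serial-number log, the "swapped" case is undetectable: an attacker simply cuts the seal and installs a fresh, intact-looking one, which is precisely why the court insisted on written procedures for recording and maintaining serial numbers.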

Judge Feinberg gave the State until July 2010 to come up with a seal use protocol. The State missed this deadline, but upon being reminded of it, they submitted to the Court some woefully inadequate sketches for such a protocol. The Court rejected these sketches and told them to come up with a real protocol. In September 2010 they tried again with a lengthier document that was still short on specifics, and the Court again found this inadequate. In October 2010 they tried again, asking for another 12-month extension, which the judge granted. In addition they proposed some new seal protocols, but asked the Court not to show them to Plaintiffs’ experts — most unusual in the tradition of Anglo-American law, where the Court is supposed to hear from both sides before a finding of fact. As of March 2011, Judge Feinberg had not yet decided whether the State has a seal use protocol in compliance with her Order.

I’ve been observing the New Jersey Division of Elections quite closely over the past few years, as this litigation has dragged on. In some things they do a pretty good job: they are competent at voter registration, and they do maintain enough polling places so that the lines don’t get long—and these are basics of election administration that we should not take for granted. But with regard to the security of their voting machines, they just don’t get it. These direct-recording electronic voting machines are inherently insecure, and in the period 2008-2010 they have applied no fewer than seven different ad-hoc “patches” to try to secure these machines: four different seal regimes, followed by three different documents claiming to be seal use protocols.

Is the New Jersey Division of Elections deliberately stalling, preserving insecure elections by dragging this case out, always proposing too little, too late and always requesting another extension? Or do they just not care, so through their lack of attention they always propose too little, too late and always request another extension? Even if the Division of Elections could come up with a seal use protocol that the Court would accept, how could we believe that these Keystone Kops could have the follow-through, the “security culture”, to execute such a protocol in the decades to come?

These voting machines are inherently insecure. The State claims they could be made secure with good seals. That’s not true: even with perfect seals and a perfectly executed seal-use protocol, there is the danger of locking fraudulent software securely into the voting machine! But even on its own flawed terms—trying to solve the problem with seals instead of with an inherently auditable technology—the State is failing to execute.

Internet Voting in Union Elections?

The U.S. Department of Labor (DOL) recently asked for public comment on a fascinating issue: what kind of guidelines should they give unions that want to use “electronic voting” to elect their officers? (Curiously, they defined electronic voting broadly to include computerized (DRE) voting systems, vote-by-phone systems and internet voting systems.)

As a technology policy researcher with the NSF ACCURATE e-voting center, I figured we should have good advice for DOL.

(If you need a quick primer on security issues in e-voting, GMU’s Jerry Brito has just posted an episode of his Surprisingly Free podcast where he and I work through a number of basic issues in e-voting and security. I’d suggest you check out Jerry’s podcast regularly as he gets great guests (like a podcast with CITP’s own Tim Lee) and really digs deep into the issues while keeping it at an understandable level.)

The DOL issued a Request for Information (PDF) that asked a series of questions, beginning with the very basic, “Should we issue e-voting guidelines at all?” The questions go on to ask about the necessity of voter-verified paper audit trails (VVPATs), observability, meaningful recounts, ballot secrecy, preventing flawed and/or malicious software, logging, insider threats, voter intimidation, phishing, spoofing, denial-of-service and recovering from malfunctions.

Whew. The DOL clearly wanted a “brain dump” from computer security and the voting technology communities!

It turns out that labor elections and government elections aren’t as different as I originally thought. The controlling statute for union elections (the LMRDA) and caselaw* that has developed over the years require strict ballot secrecy–such that any technology that could link a voter and their ballot is not allowed–both during voting and in any post-election process. The one major difference is that there isn’t a body of election law and regulation on top of which unions and the DOL can run their elections; for example, election laws frequently disallow campaigning or photography within a certain distance of an official polling place while that would be hard to prohibit in union elections.

After a considerable amount of wrangling and writing, ACCURATE submitted a comment, find it here in PDF. The essential points we make are pretty straightforward: 1) don’t allow internet voting from unsupervised, uncontrolled computing devices for any election that requires high integrity; and, 2) only elections that use voter-verified paper records (VVPRs) subject to an audit process that uses those records to audit the reported election outcome can avoid the various types of threats that DOL is concerned with. The idea is simple: VVPRs are independent of the software and hardware of the voting system, so it doesn’t matter how bad those aspects are as long as there is a robust parallel process that can check the result. Of course, VVPRs are no panacea: they must be carefully stored, secured and transported and ACCURATE’s HCI researchers have shown that it’s very hard to get voters to consistently check them for accuracy. However, those problems are much more tractable than, say, removing all the malware and spyware from hundreds of thousands of voter PCs and mobile devices.
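The audit process our comment recommends can be sketched simply: hand-count a randomly chosen sample of the voter-verified paper records and compare against the reported electronic totals, escalating on any discrepancy. (This is only the comparison step; real risk-limiting audits also choose sample sizes statistically to bound the chance of certifying a wrong outcome, and the function and variable names here are my own.)

```python
import random

def audit_outcome(reported: dict, paper_tally, precincts, sample_size, seed=0):
    """Compare hand counts from voter-verified paper records against the
    reported electronic totals for a random sample of precincts.

    paper_tally: function mapping a precinct to its hand-counted totals.
    Any mismatch triggers escalation (a larger sample or a full recount).
    """
    rng = random.Random(seed)   # in a real audit, a publicly committed seed
    for precinct in rng.sample(list(precincts), sample_size):
        if paper_tally(precinct) != reported[precinct]:
            return "escalate"
    return "sample-consistent"
```

The key property is the one argued above: because the paper records are produced and checked independently of the voting system’s software, this check remains meaningful no matter how compromised that software is.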

I must say I was a bit surprised to see the other sets of comments submitted, mostly by voting system vendors and union organizations, but also the Electronic Privacy Information Center (EPIC). ACCURATE and EPIC seem to be lone voices in this process “porting” what we’ve learned about the difficulties of running secure civic elections to the labor sphere. Many of the unions talked about how they must have forms of electronic, phone and internet voting as their constituencies are spread far and wide, can’t make it to polling places and are concerned with environmental impacts of paper and more traditional voting methods. Of course, we would counter that accommodations can be made for most of these concerns and still not fundamentally undermine the integrity of union elections.

Both unions and vendors used an unfortunate rhetorical tactic when talking about the security properties of these systems: “We’ve run x hundreds of elections using this kind of technology and have never had a problem/no one has ever complained about fraud.” Unfortunately, that’s not how security works. In adversarial settings, as with financial audits, past results are no basis for predicting future performance. That is, the SEC doesn’t tell companies that because their past 10 years of financials have been in order, they can take a few years off. No, security requires careful design, affirmative effort and active auditing to assure that a system does not violate the properties it claims.

There’s a lot more in our comment, and I’d be more than happy to respond to comments if you have questions.

* Check out the “Court Cases” section of the Federal Register notice linked to above.

avatar

A Legacy at Risk: How the new Ministry of Culture in Brazil reversed its digital agenda

Former Brazilian president Luiz Inacio Lula da Silva has become a prominent figure in the political world. When he completed his second and last term last December, 87% of Brazilians approved of his government, an unprecedentedly high rate. So it is not surprising that his successor Dilma Roussef, the first woman elected president in Brazil, took office with his strong support and the promise of continuity.

However, disappointment about that promise is growing, at least in regard to one of Lula’s landmark policies: his support for the so-called “digital culture” policies. “Digital culture” is the expression Brazilians use to refer to a broad agenda. It derives from the principle that technology is a crucial tool for cultural policies, especially because it allows the democratization of access, and of the production and dissemination of cultural artifacts. It also includes the reform of copyright, especially because Brazilian copyright law has become notoriously restrictive, preventing consumers from copying their CDs onto an iPod, a library from digitizing an old book for preservation, or a professor from using excerpts of a film in the classroom. Finally, the digital culture agenda also includes support for open licensing models, such as free software or Creative Commons.

These policies were successfully deployed by Gilberto Gil, a popular musician appointed Minister of Culture in 2003. He was profiled as early as 2004 by Wired Magazine as a champion of free culture and free software. Mr. Gil became such a popular politician in the country that some started calling him “the Lula of Lula”, in reference to his high popularity and progressive policies, within an already popular and progressive government.

Mr. Gil’s policies were continued by his successor (and former chief of staff) Juca Ferreira, who was appointed Minister of Culture in 2008 after Gil resigned to devote more time to his music career. One of the most successful policies implemented by Gil and Juca was the creation of the so-called “cultural hotspots”. The program provides resources for grassroots cultural initiatives and organizations to acquire multimedia production equipment and broadband Internet access. More than 4,000 hotspots were created, spread over more than 1,000 cities in the country, many of them in poor areas, rural communities, or favelas (shanty towns).

Mr. Gil described the idea of the hotspots as an “anthropological tao-in”, in reference to the Chinese therapeutic massage that, when applied to the right spots of the body, awakens its internal energy. In his view, with the right incentives it was possible to energize and foster cultural practices in often-neglected places. Every citizen should be considered a producer, and not only a consumer, of culture. The hotspots would provide the tools necessary for access, production, and dissemination of local culture, especially for those coming from poor or peripheral areas.

Information technology and the hacker ethic were an integral part of that vision, including incentives for the adoption of free software and Creative Commons, which eventually led to a national discussion about the impact of copyright on cultural production, spurring the ongoing copyright reform process.

As Mr. Gil put it in his own words in 2005, in a speech he delivered at NYU:

I, Gilberto Gil, Brazilian citizen and citizen of the World, Minister of Culture of Brazil, work with music, at the Ministry, and in all dimensions of my life under the inspiration of the hacker ethic – and concerned with the issues that my world and my time present to me, such as the issue of digital inclusion, the issue of free software and the issue of regulation and development of the production and dissemination of audiovisual content by any means, for any purpose.

I want indeed for the Ministry of Culture of Brazil to be a laboratory for new ideas, capable of inventing new procedures for the world’s creative industries, and capable of proposing suggestions aimed at overcoming the present dead ends – I did indeed think that my country should dare and not wait for solutions to come from outside, from societies that would tell us Brazilians which path should be followed for our development, as if our future could only be our becoming a nation such as the ones that exist here or in Europe.

Gil’s speech now seems almost lost in a distant time. The reason is that the newly appointed Minister of Culture, Mrs. Ana de Hollanda, has taken advantage of her first weeks in office to reverse much of what was built in the past 8 years. By way of example, one of her first actions was to remove the Creative Commons license from the Ministry’s website, without any prior notice. The license had been used for the past 6 years, and the Ministry of Culture was actually the pioneer in its adoption at the government level. It is worth noting that the CC licenses continue to be used by other government branches, including the official weblog of president Dilma Roussef. Ironically, on the same day the licenses were taken down by the Ministry of Culture, the Ministry of Planning issued a normative instruction fostering the adoption of open licenses, expressly mentioning Creative Commons.

This contradiction led prominent politicians in Brazil, including Congress member Paulo Teixeira, to claim that the Ministry of Culture has engaged in policies that conflict with the overall direction of the Federal Government. Mr. Teixeira recalls that during the presidential campaign, president Dilma Roussef met with Lawrence Lessig, founder of Creative Commons, at an important campaign event. She also publicly committed to going ahead with the copyright reform and the digital culture agenda. Before that, in 2009, president Lula and Dilma (then his Chief of Staff) attended together the International Free Software Forum (FISL 10), one of the largest global free software events, which takes place in the city of Porto Alegre. There, Lula’s speech focused on his support for digital culture, Internet freedom and free software.

Another source of criticism is the new Minister of Culture’s proximity to the copyright collecting societies. By way of example, in her first weeks in office, the Minister agreed to meet with Hildebrando Pontes, a lawyer who works for the collecting societies and has become notorious for arguing that copyright should last forever. At the same time, the Ministry declined to meet with representatives of civil society, including those from the “cultural hotspots” program. She then fired the chief copyright officer who had led the reform process for the past 6 years, and appointed Mrs. Marcia Regina Barbosa, a lawyer who worked with Hildebrando Pontes.

Collecting societies are a controversial institution in Brazil. They face strong discontent from rights holders, who claim they are not paid properly. They also face discontent from their paying “customers”, who claim the criteria for setting royalty prices are simply obscure. Congressional inquiry committees have also found them to lack transparency and clear accounting. One of the goals of the copyright reform initiated by Mr. Gilberto Gil was precisely to implement a minimum set of regulations over the collecting societies. By law they have a monopoly over their business, but unlike in other countries, no regulation applies to their activities, which remain exempt from any sort of independent assessment. Regulation is also supported by many prominent Brazilian musicians, who have recently become vocal about the issue.

The Ministry of Culture’s change of policy has drawn the attention of both national and international organizations. Even before the Minister’s inauguration, an open letter signed by more than 1,500 representatives of civil society organizations in Brazil was posted online, expressing concern about the possible change of direction. Folha de São Paulo, the largest newspaper in the country, wrote a piece about the letter. The Minister, however, declined to provide any comments to the journalist. To this date, the letter has not been answered or even acknowledged by the Minister or her staff.

The Minister’s actions, together with the absence of clear statements justifying her decisions, have generated considerable uproar. A public campaign called Sou MinCC (“I am MinCC”) emerged (MinC is the acronym for the Ministry of Culture; MinCC is the result of MinC + CC, in reference to the Creative Commons licenses). Besides that, the Commons Strategies Group, an international NGO, prepared an open letter (led by Silke Helfrich at the World Social Forum in Dakar) to President Dilma, also expressing concern about the new policies. The letter was released on February 21st and gathered the support of organizations such as Creative Commons, the Free Knowledge Institute (Netherlands), and La Quadrature du Net (France), among others.

This is an important moment in the history of cultural policies in Brazil. There is a shared feeling that much of what was built in the past 8 years is at risk. A heated debate has taken over the Brazilian public sphere, with articles being published in all the major newspapers. The collecting societies and their members have taken a stand in favor of the Minister, claiming that the decisions taken so far are a “sovereign act”, that the collecting societies should indeed be exempt from any external supervision, and that the copyright reform should be halted for good.

But the place where the debate is really developing on a daily basis is the Internet. Bloggers, twitterers and social network members have engaged fiercely in the discussion of the current situation. Many of them were too young to even remember the appointment of Gilberto Gil when he took office. It is a new generation that has risen for the first time to debate the future of culture and technology policies in Brazil. Inadvertently, the new Minister Ana de Hollanda is contributing to the emergence of a new generation of voices online. One can now only hope that she will eventually listen to them.

avatar

Seals on NJ voting machines, March 2009

During the NJ voting-machines trial, both Roger Johnston and I showed different ways of removing all the seals from voting machines and putting them back without evidence of tampering. The significance of this is that one can then install fraudulent vote-stealing software in the computer.

The State responded by switching seals yet again, right in the middle of the trial! They replaced the white vinyl adhesive-tape seal with a red tape seal that has an extremely soft and sticky adhesive. In addition, they proposed something really wacky: they would squirt superglue into the blue padlock seal and into the security screw cap.

Nothing better illustrates the State’s “band-aid approach, where serious security vulnerabilities can be covered over with ad hoc fixes” (as Roger characterizes it) than this. The superglue will interfere with election workers’ ability to (legitimately) remove the seal to maintain the machine. The superglue will also make it more difficult to detect tampering, because it goes on in such a variable way that the inspector doesn’t know what “normal” is supposed to look like. And the extremely soft adhesive on the tape seal is very difficult to clean up when an election worker (legitimately) removes it to maintain the machine; of course, one must clean up all the old adhesive before resealing the voting machine.

Furthermore, Roger demonstrated for the Court that all these seals can still be defeated, with or without the superglue. Here’s the judge’s summary of his testimony about all these seals:


New Jersey is proposing to add six different kinds of seals in nine different locations to the voting machines. Johnston testified he has never witnessed this many seals applied to a system. At most, Johnston has seen three seals applied to high-level security applications such as nuclear safeguards. According to Johnston, there is recognition among security professionals that the effective use of a seal requires an extensive use protocol. Thus, it becomes impractical to have a large number of seals installed and inspected. He testified that the use of a large number of seals substantially decreases security, because attention cannot be focused for a very long time on any one of the seals, and it requires a great deal more complexity for these seal-use protocols and for training.

For more details and pictures of these seals, see “Seal Regime #4” in this paper.

avatar

Do corporations have a "personal privacy" right?

Today, the Supreme Court released its unanimous opinion in Federal Communications Commission v. AT&T Inc., No. 09-1279 (U.S. Mar. 1, 2011).

At issue was the question: does a corporation have a “personal privacy” right under the Freedom of Information Act? In this decision, the United States Supreme Court said “no.” The decision was 8-0, with Associate Justice Kagan not participating.

What was the case about? A trade association sought disclosure of documents that AT&T had submitted to the FCC during an investigation. AT&T argued that the documents were exempt under FOIA Exemption 7(C), which prohibited disclosure of law enforcement records if the disclosure “could reasonably be expected to constitute an unwarranted invasion of personal privacy.” The United States Court of Appeals for the Third Circuit accepted AT&T’s argument, and held that a corporation could have a “personal privacy” right because a corporation was a “person” under FOIA.

The Supreme Court disagreed. Looking at the express text of FOIA as well as the common meaning of words, Chief Justice Roberts, writing for the Court, held that, absent an express definition of “personal” in FOIA, that word refers to individuals and not corporate entities.

It should be noted that corporations are, for various purposes, considered “persons” under constitutional and common law. However, at issue was a question of statutory interpretation.

The Court even got in a good zinger at the end, noting that, “We trust that AT&T will not take it personally.”