Google+Motorola = Software Patent Indictment

Google’s announcement this morning that it had agreed to purchase Motorola Mobility for $12.5 billion sent MMI’s stock price soaring and set off another conversation about software patents and the smartphone ecosystem.

Larry Page himself emphasized the patent angle of the merger in Google’s corporate blog post:

We recently explained how companies including Microsoft and Apple are banding together in anti-competitive patent attacks on Android. The U.S. Department of Justice had to intervene in the results of one recent patent auction to “protect competition and innovation in the open source software community” and it is currently looking into the results of the Nortel auction. Our acquisition of Motorola will increase competition by strengthening Google’s patent portfolio, which will enable us to better protect Android from anti-competitive threats from Microsoft, Apple and other companies.

Android users have already faced several patent lawsuits, and after a coalition of Google’s opponents, including Microsoft, Apple, and Oracle, purchased Nortel’s patent portfolio for $4.5 billion, Google and its Android partners (including HTC and Motorola) had reason to fear a deepening thicket. Without many patents of its own, Google couldn’t make the traditional counter-strike of suing its attackers for infringement. Motorola’s mobile portfolio (17,000 issued patents and 7,500 pending applications) adds to Android’s arsenal.

Of course, Motorola also makes hardware — smartphones that run Android — but few analysts are emphasizing that point. On that front, the acquisition raises strategic questions for Google: Can it convincingly offer the Android platform to others with whom it now competes? Even if Google maintains Motorola as a separate business, as Page says it intends, will now-competing vendors such as HTC, Samsung, and Acer be reassured of Google+Motorola’s neutrality among them?

Owning a handset maker could improve Android, if it shortens the feedback loop for problem-reporting and new ideas, but it could hurt the platform — and its end-users — more if it scared off competing hardware vendors, shrinking the base to which new applications are written and reducing the diversity of options available to end-users. As proprietor of an open, multi-sided market, Google needs to serve Android’s hardware vendors, app developers, and end-users well enough that a good-sized group of each continues to bring it value — and that end-users keep watching the ads whose sale puts money into Google’s pocket. (Oh, and maybe the acquisition will revitalize Google TV, as Lauren Weinstein points out.)

The patent motivations are more straightforward. As we know, it doesn’t take deliberate copying to infringe a patent, and patents are granted on such small increments of software advance that an independently developed application may incorporate dozens to hundreds of elements on which others claim patents. At millions of dollars a lawsuit, it’s expensive to disprove those claims. At least if those others are also making phones or software, Google is now more likely to have patents on what they are doing too, paving the way for a cross-license rather than a lawsuit.

Wouldn’t we all be better off skipping those patent threats and cross-licensing transaction costs? As Google’s pre-Motorola travails showed, it’s almost* impossible to opt out of the patent system by choosing to publish and not patent your own inventions. Unlike in copyright, where you can share under Creative Commons, for example, and need only show that you never accessed another’s work if accused of infringement, you can save yourself from patent claims only by assuring that every bit of technology you use was published more than 17-20 years ago! (*Rare but not impossible: Richard Hipp of SQLite says he uses only 17-year-old, published algorithms to keep his code free of patent clouds.)

In a work-in-progress, I argue that patent law’s incentives aren’t working right for software, because they come at too early a stage in development. Patents on software motivate lawsuits more than they induce or reward product development. Google+Motorola may prove to have non-patent benefits too, but its early indications shine a spotlight on the thorny thickets of the patent landscape.

A review of the FVAP UOCAVA workshop

The US Federal Voting Assistance Program (FVAP) is the Department of Defense agency charged with assisting military and overseas voters with all aspects of voting, including registering to vote, obtaining ballots, and returning ballots. FVAP’s interpretation of Federal law (*) says that it must perform a demonstration of electronic return of marked ballots by overseas military voters (**) at the first Federal election that occurs one year after the adoption of guidelines by the US Election Assistance Commission. Since the EAC hasn’t adopted such guidelines yet (and isn’t expected to for at least another year or two), the clock hasn’t started ticking, so a 2012 demonstration is impossible and a 2014 demonstration looks highly unlikely. Hence this isn’t a matter of imminent urgency; however, such systems are complex, and FVAP is trying to get the ball rolling on what such a system would look like.

As has been discussed previously on this blog, nearly all computer security experts are very concerned about the prospect of marked ballot return over the internet (which we will henceforth refer to as “internet voting”). Concerns include the vulnerability of client computers, auditability, usability, coercion, and more. On the flip side, many states and localities are marching full steam ahead on their own internet voting systems, generally ignoring the concerns of computer scientists and focusing on the perceived greater convenience and hoped-for increase in turnout. Many of these systems include email return of marked ballots, which computer scientists generally consider even riskier than web-based voting.

FVAP has been caught between the legal mandates and the technical experts. In an effort to break this logjam, they’ve organized a series of open fora – first in August 2010 just before USENIX Security in Washington DC, then in March 2011 just before the Election Verification Network workshop in Chicago IL, and last weekend just before USENIX Security in San Francisco CA. All three brought together representatives from FVAP, voting system vendors, election officials, computer scientists, and voting activists to discuss the issues. Several of the Freedom To Tinker bloggers have been present at all three meetings, and have been frustrated that the first two ended at an impasse – computer scientists saying “it doesn’t work” and FVAP (and others) saying “we need a solution anyway”.

Fortunately, the third meeting concluded in a far more constructive way. While all agree there are significant impediments, a consensus was reached that the best path forward is a multi-stage competition, in much the same fashion as the competitions the National Institute of Standards and Technology (NIST) ran for the Advanced Encryption Standard (AES) and is now running for the Secure Hash Algorithm 3 (SHA-3).

The competition is structured as a series of phases that are completely open, which everyone expects to be at least somewhat controversial, as some organizations (such as vendors) will want to protect their intellectual property. All submissions will be shared with the public, and competing teams will be encouraged to critique each other’s submissions. In earlier phases the critiques will focus on the paper requirements and designs; in later phases they may include finding vulnerabilities in architectures and implementations. Submitters may claim patent and/or copyright on their submissions, but they must grant the public (including competitors) the right to use the submissions for analysis, including compiling, testing, and modifying the software for testing purposes. (However, submitters may preclude such use for production or resale purposes.) Thus, trade secrets will be precluded from the competitive process.

The competition will have three phases, each of which may include one or more iterations.

  • In the first phase (which, as computer scientists, we named “round 0”), submissions will focus on requirements for internet voting systems. Submitters will define characteristics that must be met in the following phases. Submissions may also include use cases for which the requirements are applicable – for example, requirements that could apply in environments where all voters have smart cards, such as the US military. As described above, submissions will be open to the public, and anyone (especially other submitters) will be encouraged to critique submissions to find the best aspects. At the conclusion of this round, FVAP will (possibly with the assistance of government experts) consolidate the requirements into a single set that will govern the following phase.
  • In the second phase (“round 1”), submissions will provide high level designs and detailed hardware and software architectures, along with procedures necessary for secure operation. The submissions for this round need to be detailed enough that a reasonably skilled person could implement a realization of the system, although many details such as user interfaces and database layouts will be undefined. As with the first phase, submissions will be open for critique. In this phase critiques will focus on identifying areas where designs do not meet the requirements defined in the first phase. The result may be modification of architectures to incorporate ideas from several teams. At the conclusion of this phase, FVAP will (again with assistance from government experts) narrow down the set of acceptable architectures. Or perhaps not – if no architecture is good enough to satisfy the requirements, FVAP may conclude that the experiment should not be run (and cancel the third phase).
  • In the third phase (“round 2”) submitters will create implementations of one or more of the architectures (perhaps even adopting architectures from other teams, if licensing terms permit). During the critique period, teams will seek to find security vulnerabilities in other implementations, and fix problems identified in their own implementation. Usability testing should be part of this phase, as systems too complex for voters to use effectively (even if secure) need to be identified and improved. At the conclusion of this phase, FVAP will identify one or more implementations that are adequate for meeting their demonstration project requirements. Or perhaps not – if no implementation is good enough, FVAP may conclude that the experiment should not be run.

What happens if there is no acceptable solution at the conclusion of the second or third phase? That’s possible – and if it happens, that may be cause for FVAP to request that Congress modify its charter to eliminate the requirement for electronic return of marked ballots. If the best minds in the country conclude that internet voting is a perpetual motion machine, no amount of laws and regulations will make it possible.

How long will all this take? We estimate the entire process will take three or four years, allowing time for FVAP to publish a solicitation, organizations to create submissions, the public critique periods, FVAP’s consolidation and decision making, and transitions to the next phase.

In the meantime, there’s little doubt that some states will continue to move forward with existing insecure solutions. We believe, and expect that most other computer scientists will agree, that this is a case where science should be allowed to take its course before moving to implementation. We hope that FVAP will speak out publicly against such ill-advised experiments.

For now, we look forward to working with FVAP to realize the first-ever national internet voting competition.

(*) While there is some disagreement on interpretation of the law, since I’m not a lawyer and hence not competent to determine the accuracy of that interpretation, this blog entry presumes that the FVAP interpretation is correct.

(**) The term “military and overseas voters” means both military voters stationed away from their legal home (e.g., at a base in another state or overseas) and civilians living overseas (whether on a temporary basis such as contractors or on a permanent basis). Thus this includes people working for organizations like Peace Corps and embassies as well as expatriates. However, the FVAP mandate for internet voting only applies to overseas military voters, and not domestic military voters or overseas civilians.

Edited Aug 13 @ 12:17pmET: Changed first footnote to explain that I’m not a lawyer and hence not interpreting the law.

Edited Aug 15 @ 1:08pmET: Corrected name of EVN workshop.

The End of Gnutella?

Almost exactly two years ago, I wrote an essay examining the case of Arista Records et al v. Lime Group et al. It was presented on Freedom-to-Tinker in a series of three posts (1, 2, 3). These articles presented an analysis showing that any open filesharing network, such as Gnutella, is vulnerable to spamming. Lime Wire, without advertising as much, was acting as a spam cop for Gnutella, keeping the network safe for infringers. It was my view that the decision in the case could be made to turn on the actions Lime Wire was taking to control spammers on the Gnutella network, and that if the case were examined in that light, Lime Wire could be found liable for contributory infringement while still respecting the First Amendment rights of software publishers.

Since that time, a great deal has occurred in the world of filesharing. It is worthwhile to examine the current state of affairs, which is predictable in some ways and yet quite surprising in others.
Retiring FedThread

Nearly two years ago, the Federal Register was published in a structured XML format for the first time. This was a big deal in the open government world: the Federal Register, often called the daily newspaper of our federal government, is one of our government’s most widely read publications. And while it could previously be read in paper and PDF forms, it wasn’t easy to digitally manipulate. The XML release changed all this.

When we heard this was happening, four of us here at CITP—Ari Feldman, Bill Zeller, Joe Calandrino, and myself—decided to see how we might be able to improve how citizens could interact with the Federal Register. Our big idea was to make it easy for anyone to comment paragraph-by-paragraph on any of its documents, like a proposed regulation. The site, which we called FedThread, would provide an informal public forum for annotating these documents, and we hoped it would lead to useful online discussions about the merits and weaknesses of all kinds of federal regulatory activity. We also added other useful features, like a full-text search engine and custom RSS feeds. Building these features for the Federal Register became a straightforward task only because of the new XML version. We built the site in just eight days from conception to release.
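
To give a flavor of why the XML release mattered, here is a minimal sketch (in Python, the language FedThread was built in) of the kind of processing the structured format enables: pulling out individual paragraphs and giving each a stable identifier that comments can attach to. The element names and file name below are illustrative assumptions, not the actual GPO schema.

    # Sketch: walk a Federal Register XML file and assign each paragraph a
    # stable ID so comments can be attached paragraph-by-paragraph.
    # NOTE: "DOCUMENT", "DOCNUMBER", and "P" are assumed element names for
    # illustration; a real parser would follow GPO's published schema.
    import hashlib
    import xml.etree.ElementTree as ET

    def extract_paragraphs(xml_path):
        tree = ET.parse(xml_path)
        for doc in tree.iter("DOCUMENT"):          # one rule, notice, etc.
            doc_id = doc.findtext("DOCNUMBER", default="unknown")
            for i, p in enumerate(doc.iter("P")):  # body-text paragraphs
                text = "".join(p.itertext()).strip()
                if not text:
                    continue
                # Hash the text so the ID stays stable across re-parses
                # of an unchanged document.
                digest = hashlib.sha1(text.encode("utf-8")).hexdigest()[:8]
                yield f"{doc_id}:{i}:{digest}", text

    if __name__ == "__main__":
        for para_id, text in extract_paragraphs("fr-sample.xml"):
            print(para_id, text[:60])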

Another trio of developers in San Francisco also saw opportunities in this free machine-readable resource and developed their own project, GovPulse, which won the Sunlight Foundation’s Apps for America 2 contest. They were then approached by the staff of the Federal Register last summer to expand their site into what would become the new online face of the publication, Federal Register 2.0. Their approach to user comments actually guided users into participating in the formal regulatory comment process—a great idea. Federal Register 2.0 included several features present in FedThread, and many more. Everything was done using open source tools and made available to the public as open source.

This has left little reason for us to continue operating FedThread. It has continued to reliably provide the features we developed two years ago, but our regular users will find it straightforward to transition to the similar (and often superior) search and subscription features on Federal Register 2.0. So, we’re retiring FedThread. However, the code that we developed will continue to be available, and we hope that enterprising developers will find components to re-use in their own projects that benefit society. For instance, the general purpose paragraph-commenting code that we developed can be useful in a variety of projects. Of course, that code itself was an adaptation of the code supporting another open source project—the Django Book, a free set of documentation about the web framework that we were using to build FedThread (but this is what developers would call a “meta” observation).
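
For developers curious what re-using the paragraph-commenting component might look like, here is a minimal Django sketch in the spirit of that code. The model and field names are our illustration for this post, not FedThread’s actual schema.

    # Sketch of a general-purpose paragraph-commenting model in Django.
    # Names are illustrative; FedThread's real schema may differ.
    from django.db import models

    class ParagraphComment(models.Model):
        document_id = models.CharField(max_length=64)   # which document
        paragraph_id = models.CharField(max_length=64)  # stable paragraph ID
        author = models.CharField(max_length=100)
        body = models.TextField()
        created = models.DateTimeField(auto_now_add=True)

        class Meta:
            ordering = ["created"]

    def comments_for(document_id, paragraph_id):
        """All comments on one paragraph, oldest first."""
        return ParagraphComment.objects.filter(
            document_id=document_id, paragraph_id=paragraph_id
        )

Because comments key on an opaque (document, paragraph) pair rather than on Federal Register structure, the same component drops into any project that can assign its paragraphs stable IDs.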

Ideally, this is how hacking open government should work. Free machine-readable data sets beget useful new ways for citizens to explore those data and make them useful to other citizens. Along the way, developers experiment with different ideas, some of which catch on and others of which serve as fodder for the next great idea. This happens faster than standard government contracting, and often produces more innovative results.

Finally, a big thanks to the GPO, NARA and the White House Open Government Initiative for making FedThread possible and for helping to demonstrate that this approach can work, and congratulations on the fantastic Federal Register 2.0.

Telex and Ethan Zuckerman's "Cute Cat Theory" of Internet Censorship

A few years ago, Ethan Zuckerman gave a talk at CITP on his “cute cat theory” of internet censorship (see also NY Times article), which goes something like this:

Most internet users use the internet and social media tools for harmless activities, like looking at pictures of kittens online. However, an open social media site is open to political content as well as pictures of kittens. Repressive governments might attempt to block this political content by blocking access to, say, all of Blogspot or all of Twitter, but in doing so they also block people from looking at non-political content, like pictures of cute kittens. This both brings more attention, via the Streisand effect, to the political causes the government is trying to suppress, and can politicize users who previously just wanted unfettered access to cute kittens.

This is great for Web 2.0, and suggests that activists should host their blogs on sites where a lot of kittens would be taken down as collateral damage should they be blocked.

However, what happens when a government is perfectly willing to block all social media? What if a user wants to do more than produce political content on the web?

Telex (blog post) can be seen as a technological method of implementing the cute cat theory for the entire internet: the system allows a user to circumvent internet censorship by executing a secret knock on potentially any web site outside of the censor’s network. When any web site, no matter how innocuous or critical to business or political infrastructure, can be used for a political goal in this fashion, the censorship/anti-censorship cat-and-mouse game is elevated beyond single proxies and lists of blockable Tor nodes, and beyond kittens, to the entire internet.
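
To make the “secret knock” idea concrete, here is a toy sketch, heavily simplified from the actual Telex design, of how a client can hide a tag in a protocol field that is supposed to be random, such that only a cooperating relay station holding a private key can tell it apart from noise. It assumes the third-party Python cryptography library for the X25519 Diffie-Hellman primitive.

    # Toy illustration of a Telex-style steganographic tag. NOT the real
    # protocol: real Telex fits the tag inside a standard 32-byte TLS
    # nonce and makes it fully indistinguishable from random; this sketch
    # only shows the asymmetry (the tag is easy to verify with the
    # station's private key, meaningless without it).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat,
    )

    def make_tagged_nonce(station_public):
        """Client: emit ephemeral_pub || tag (48 toy bytes)."""
        eph = X25519PrivateKey.generate()
        shared = eph.exchange(station_public)
        tag = hashlib.sha256(b"telex-toy" + shared).digest()[:16]
        pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return pub + tag

    def station_detects(nonce, station_private):
        """Station: recompute the tag; a match means 'divert this flow'."""
        eph_pub = X25519PublicKey.from_public_bytes(nonce[:32])
        shared = station_private.exchange(eph_pub)
        expected = hashlib.sha256(b"telex-toy" + shared).digest()[:16]
        return nonce[32:] == expected

    if __name__ == "__main__":
        station = X25519PrivateKey.generate()
        knock = make_tagged_nonce(station.public_key())
        print(station_detects(knock, station))                      # True
        print(station_detects(knock, X25519PrivateKey.generate()))  # False

The censor, lacking the station’s private key, cannot distinguish a tagged connection from an ordinary one without blocking every site whose traffic might pass the station, which is exactly the cute-cat dilemma scaled up to the entire internet.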