

Search Neutrality ≠ Net Neutrality

Sunday’s New York Times featured a provocative op-ed arguing that, in addition to regulating “net neutrality,” the FCC should also mandate “search neutrality”: requiring search providers to rank results without favoring their own business interests. The author heaps particular scorn upon Google for promoting its own context-relevant services (e.g., maps and weather) at the top of search results. Others have already reviewed the proposal, leveled implementation critiques, and criticized the author’s gripes with his own site. My aim here is to rebut the piece’s core argument: the analogy between search neutrality and net neutrality. Clearly, both are debates about promoting innovation and competition through a level playing field. But beyond this commonality, the parallel breaks down.

Net neutrality advocates call for regulation because ISP discrimination could render innovative services either impossible to implement owing to traffic restrictions or too expensive to deploy owing to traffic pricing. Consumers cannot “vote with their dollars” for a nondiscriminatory ISP since most locales have few providers and the market is hard to break into. Violations of net neutrality, the argument goes, threaten to nip entire industries in the bud and rob the economy of growth.

Violations of search neutrality, on the other hand, at most increase marketing costs for an innovative or competitive offering. Consumers are more than clever enough to seek out and use an alternative to a weaker Google offering (Yelp vs. Google restaurant reviews, anyone?). The author of the op-ed cites Google Maps’ dethroning of MapQuest as evidence of the power of search non-neutrality; on the contrary, I would contend that users flocked to Google’s service because it was, well, better. If Google Maps featured MapQuest’s clunky interface and vice versa, would you still use it? A glance at historical map-site statistics empirically rebuts the author’s claim: the mid-May 2007 introduction of Google’s context-relevant (“universal”) search does not appear correlated with any irregular shift in map-site traffic.

Moreover, unlike with net neutrality, search consumers stand ready to “vote with their [ad] dollars.” Should Google consistently favor its own services to the detriment of search result quality, consumers can effortlessly shift to any of its numerous competitors. It is no coincidence that Google sinks enormous manpower into improving result quality.

There may also be a benefit to the increase in marketing costs from existing violations of search neutrality, like Google’s map and weather offerings. If a service would have to be extensively marketed to compete with Google’s promoted offering – say, a current weather site vs. searching for “Stanford weather” – the market is sending a signal that consumers don’t care about the marginal quality of the product, and the non-Google provider should quit the market.

There is merit to the observation that violations of search neutrality are, on the margin, slightly anti-competitive. But this issue is dwarfed by the potential economy-scale implications of net neutrality. The FCC’s rulemaking should stay focused on the latter.


Open Government Workshop at CITP

Here at Princeton’s CITP, we have a healthy interest in issues of open government and government transparency. With the release last week of the Open Government Directive by the Obama Administration, our normally gloomy winter may prove to be considerably brighter.

In addition to creating tools like RECAP and FedThread, we’ve also been thinking deeply about the nature of open and transparent government, how system designers and architects can better create transparent systems, and how to achieve sustainability in open government. Related to these questions are those of the law.gov effort—providing open access to primary legal materials—and how best to facilitate the tinkerers who work on open government projects.

These are deep issues, so we thought it best to organize a workshop and gather people from a variety of perspectives to dig in.

If you’re interested, come to our workshop next month! While we didn’t consciously plan it this way, the last day of this workshop corresponds to the first 45-day deadline under the OGD.

Open Government: Defining, Designing, and Sustaining Transparency
January 21–22, 2010
http://citp.princeton.edu/open-government-workshop/

Despite increasing interest in issues of open government and governmental transparency, the values of “openness” and “transparency” have been under-theorized. This workshop will bring together academics, government, advocates and tinkerers to examine a few critical issues in open and transparent government. How can we better conceptualize openness and transparency for government? Are there specific design and architectural needs and requirements placed upon systems by openness and transparency? How can openness and transparency best be sustained? How should we change the provision and access of primary legal materials? Finally, how do we best coordinate the supply of open government projects with the demand from tinkerers?

Anil Dash, Director of the AAAS’ new Expert Labs, will deliver the keynote. We are thrilled with the diverse list of speakers, and are looking forward to a robust conversation.

The workshop is free and open to the public, although we ask that you RSVP so that we can be sure to have a name tag and lunch for you.


Erroneous DMCA notices and copyright enforcement, part deux

A few weeks ago, I wrote about a deluge of DMCA notices and pre-settlement letters that CoralCDN experienced in late August. This article actually received a bit of press, including MediaPost, ArsTechnica, TechDirt, and, very recently, Slashdot. I’m glad that my own experience was able to shed some light on the more insidious practices that are still going on under the umbrella of copyright enforcement. More transparency is especially important at this time, given the current debate over the Anti-Counterfeiting Trade Agreement.

Given this discussion, I wanted to write a short follow-on to my previous post.

The VPA drops Nexicon

First and foremost, I was contacted by the founder of the Video Protection Alliance (VPA) not long after this story broke. I was informed that the VPA has not actually developed its own technology to discover users who are actively uploading or downloading copyrighted material, but rather contracts out this role to Nexicon. (You can find a comment from Nexicon’s CTO on my previous article here.) As I was told, the VPA was contracted by certain content publishers to help reduce copyright infringement of (largely adult) content. The VPA in turn contracted Nexicon to find IP addresses participating in BitTorrent swarms for the specified movies. Using the IP addresses given to them by Nexicon, the VPA would then send pre-settlement letters to the network providers of those addresses.

The VPA’s founder also assured me that their main goal was to reduce infringement, as opposed to collecting pre-settlement money. (And that users had been let off with only a warning, or, in the cases where infringement might have been due to an open wireless network, informed how to secure their wireless network.) He also expressed surprise that there were false positives in the addresses given to them (beyond said open wireless), especially to the extent that appropriate verification was lacking. Given this new knowledge, he stated that the VPA dropped their use of Nexicon’s technology.

BitTorrent and Proxies

Second, I should clarify my claims about BitTorrent’s usefulness behind an on-path proxy. While it is true that the address registered with the BitTorrent tracker is not usable, peers connecting from behind a proxy can still download content from the other addresses they learn from the tracker. If their requests to those addresses are optimistically unchoked, they even have the opportunity to engage in incentivized bilateral exchange. Furthermore, the use of DHT- and gossip-based discovery with other peers—the latter is termed PEX, for Peer EXchange, in BitTorrent—allows their real address to be learned by others. Thus, through these more modern discovery means, other peers may initiate connections to them, further increasing the opportunity for tit-for-tat exchanges.
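To make this concrete, here is a minimal sketch in Python (the function name is mine) of decoding the compact peer list that trackers commonly return in announce responses: 6 bytes per peer, a 4-byte IPv4 address plus a 2-byte big-endian port. A peer behind a proxy decodes the same list as everyone else and can connect directly to every address in it, even though its own registered address is wrong.

```python
import socket
import struct

def parse_compact_peers(blob):
    """Decode a tracker's compact 'peers' value: each peer is 6 bytes,
    a 4-byte IPv4 address followed by a 2-byte big-endian port."""
    peers = []
    for off in range(0, len(blob) - len(blob) % 6, 6):
        ip = socket.inet_ntoa(blob[off:off + 4])
        (port,) = struct.unpack("!H", blob[off + 4:off + 6])
        peers.append((ip, port))
    return peers

# Example: a response blob containing 192.0.2.1:6881 and 198.51.100.7:51413
blob = (socket.inet_aton("192.0.2.1") + struct.pack("!H", 6881)
        + socket.inet_aton("198.51.100.7") + struct.pack("!H", 51413))
print(parse_compact_peers(blob))  # [('192.0.2.1', 6881), ('198.51.100.7', 51413)]
```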

Some readers also pointed out that there is good reason why BitTorrent trackers do not just accept any IP address communicated to them via an HTTP query string, but rather use the end-point IP address of the TCP connection. Namely, any HTTP query parameter can be spoofed, allowing anybody to add someone else’s IP address to the tracker list. That would make the spoofed party susceptible to receiving DMCA complaints, just as we experienced with CoralCDN. From a more technical perspective, their machine would also start receiving unsolicited TCP connection requests from other BitTorrent peers, an easy DoS amplification attack.

That said, there are some additional checks that BitTorrent trackers could do. For example, if an IP query-string parameter or an X-Forwarded-For HTTP header is present, record the TCP endpoint address only if it matches what those fields claim. Additionally, some BitTorrent tracker operators have mentioned that they whitelist certain IP addresses as trusted proxies; in those cases, the X-Forwarded-For address is already used. Otherwise, I don’t see a good reason (plausible deniability aside) for recording an IP address that is known to be likely incorrect.
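Here is a minimal sketch of that check in Python, with hypothetical names (this is not any real tracker’s code, just the decision logic I have in mind):

```python
# Proxies whose X-Forwarded-For header we trust (hypothetical list).
TRUSTED_PROXIES = {"192.0.2.10"}

def address_to_record(tcp_endpoint_ip, query_string_ip, x_forwarded_for):
    """Return the IP address to add to the peer list, or None to record nothing."""
    # A trusted proxy tells us the client's real address directly.
    if tcp_endpoint_ip in TRUSTED_PROXIES and x_forwarded_for:
        return x_forwarded_for

    claimed = query_string_ip or x_forwarded_for
    if claimed is None or claimed == tcp_endpoint_ip:
        # Either no claim was made, or the claim agrees with the TCP
        # endpoint; the endpoint address is our best evidence.
        return tcp_endpoint_ip

    # The claim and the endpoint disagree: the endpoint is likely a proxy
    # and the claim is unverified, so recording either risks naming an
    # address that never participated in the swarm.
    return None
```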

Best Practices for Online Technical Copyright Enforcement

Finally, my article pointed out a strategy that I clearly thought was insufficient for copyright enforcement: simply crawling a BitTorrent tracker for a list of registered IP addresses and issuing an infringement notice to each one. I’ll add two other approaches that I think are insufficient, unethical, or illegal—or all three—yet have been bandied about as possible solutions.

  • Wiretapping: It has been suggested that network providers could perform deep-packet inspection (DPI) on their customers’ traffic in order to detect copyrighted content. This approach probably breaks a number of laws (in the U.S. or elsewhere), creates a dangerous precedent and standing infrastructure for far-flung Internet surveillance, and is of dubious benefit anyway given file-sharing software’s move to encrypted communication.
  • Spyware: By surreptitiously installing spyware/malware on end-hosts, one could scan a user’s local disk in order to detect the existence of potentially copyrighted material. This practice has even worse legal and ethical implications than network-level wiretapping, and yet politicians such as Senator Orrin Hatch (Utah) have gone as far as declaring that infringers’ computers should be destroyed. And it opens users up to the real danger that their computers or information could be misused by others; witness, for example, the security weaknesses of China’s Green Dam software.

So, if one starts from the position that copyrights are valid and should be enforceable—some dispute this—what would you like to see as best practices for copyright enforcement?

The approach taken by DRM is to build a technical framework that restricts users’ ability to share content or to consume it in a proscribed manner. But DRM has been widely disliked by end-users, mostly because it creates a poor user experience and interferes with expected rights (under the fair-use doctrine). More importantly, DRM is beside the point here: copyright infringement notices are needed precisely after “unprotected” content has already flown the coop.

So I’ll start with two practices that I would want all enforcement agencies to adopt when issuing DMCA take-down notices. Let’s restrict this consideration to complaints about “whole” content (e.g., entire movies), as opposed to DMCA challenges over sampled or remixed content, which raise separate legal debates.

  • For any end client suspected of file-sharing, one MUST verify that the client was actually uploading or downloading content, AND that the content corresponded to a valid portion of a copyrighted file. In BitTorrent, this might mean that the client sends or receives a complete file block, and that the block hashes to the correct value specified in the .torrent file (see the sketch after this list).
  • When issuing a DMCA take-down notice, the request MUST be accompanied by logged information that shows (a) the client’s IP:port network address engaged in content transfer (e.g., a record of a TCP flow); (b) the actual application request/response that was acted upon (e.g., BitTorrent-level logs); and (c) that the transferred content corresponds to a valid file block (e.g., a BitTorrent hash).
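For the hash check in the first requirement, the .torrent metainfo already carries everything needed: its “pieces” field is a concatenation of 20-byte SHA-1 digests, one per piece. A minimal sketch in Python (the function name is mine; note that the protocol’s hashes cover whole pieces rather than the smaller 16 KB transfer blocks):

```python
import hashlib

def piece_hash_matches(piece_index, piece_data, pieces_field):
    """Verify a fully received piece against the .torrent metainfo.
    `pieces_field` is the raw 'pieces' value: a concatenation of
    20-byte SHA-1 digests, one per piece, in piece order."""
    expected = pieces_field[20 * piece_index : 20 * (piece_index + 1)]
    return hashlib.sha1(piece_data).digest() == expected
```

Only a peer that actually transferred the complete piece can produce data passing this check, which is precisely the kind of evidence the logs in the second requirement should capture.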

So my question to the readers: What would you add to or remove from this list? With what other approaches do you think copyright enforcement should be performed or incentivized?


Another Privacy Misstep from Facebook

Facebook is once again clashing with its users over privacy. As a user myself, I was pretty unhappy about the recently changed privacy controls. I felt that Facebook was trying to trick me into loosening controls on my information. Though the initial letter from Facebook founder Mark Zuckerberg painted the changes as pro-privacy — which led more than 48,000 users to click the “I like this” button — the actual effect of the company’s suggested new policy was to allow more public access to information. Though the company has backtracked on some of the changes, problems remain.

Some of you may be wondering why Facebook users are complaining about privacy, given that the site’s main use is to publish private information about yourself. But Facebook is not really about making your life an open book. It’s about telling the story of your life. And like any autobiography, your Facebook-story will include a certain amount of spin. It will leave out some facts and will likely offer more and different levels of detail depending on the audience. Some people might not get to hear your story at all. For Facebook users, privacy means not the prevention of all information flow, but control over the content of their story and who gets to read it.

So when Facebook tries to monetize users’ information by passing that information along to third parties, such as advertisers, users get angry. That’s what happened two years ago with Facebook’s ill-considered Beacon initiative: Facebook started telling advertisers what you had done — telling your story to strangers. But perhaps even worse, Facebook sometimes added items to your wall about what you had purchased — editing your story, without your permission. Users revolted, and Facebook shuttered Beacon.

Viewed through this lens, Facebook’s business dilemma is clear. The company is sitting on an ever-growing treasure trove of information about users. Methods for monetizing this information are many and obvious, but virtually all of them require either telling users’ stories to third parties, or modifying users’ stories — steps that would break users’ mental model of Facebook, triggering more outrage.

What Facebook has, in other words, is a governance problem. Users see Facebook as a community in which they are members. Though Facebook (presumably) has no legal obligation to get users’ permission before instituting changes, it makes business sense to consult the user community before making significant changes in the privacy model. Announcing a new initiative, only to backpedal in the face of user outrage, can’t be the best way to maximize long-term profits.

The challenge is finding a structure that allows the company to explore new business opportunities, while at the same time securing truly informed consent from the user community. Some kind of customer advisory board seems like an obvious approach. But how would the members be chosen? And how much information and power would they get? This isn’t easy to do. But the current approach isn’t working either. If your business is based on user buy-in to an online community, then you have to give that community some kind of voice — you have to make it a community that users want to inhabit.


The Role of Worst Practices in Insecurity

These days, security advisors talk a lot about Best Practices: established procedures that are generally held to yield good results. Deploy Best Practices in your organization, the advisors say, and your security will improve. That’s true, as far as it goes, but often we can make more progress by working to eliminate Worst Practices.

A Worst Practice is something that most of us do, even though we know it’s a bad idea. One current Worst Practice is the way we use passwords to authenticate ourselves to web sites. Sites’ practices drive users to re-use the same password across many sites, and to expose themselves to phishing and keylogging attacks. We know we shouldn’t be doing this, but we keep doing it anyway.

The key to addressing Worst Practices is to recognize that they persist for a reason. If ignorance is the cause, it’s not a Worst Practice — remember that Worst Practices, by definition, are widely known to be bad. There’s typically some kind of collective action problem that sustains a Worst Practice, some kind of Gordian Knot that must be cut before we can eliminate the practice.

This is clearly true for passwords. If you’re building a new web service, and you’re deciding how to authenticate your users, passwords are the easy and obvious choice. Users understand them; they don’t require coordination with any other company; and there’s probably a password-handling module that plugs right into your development environment. Better authentication will be a “maybe someday” feature. Developers make this perfectly rational choice every day — and so we’re stuck with a Worst Practice.

Solutions to this and other Worst Practices will require leadership by big companies. Google, Microsoft, Facebook and others will have to step up and work together to put better practices in place. In the user authentication space we’re seeing some movement with new technologies such as OpenID, which reduce the number of separate logins users must maintain, thereby easing the move to better practices. But on this and other Worst Practices, we have a long way to go.

Which Worst Practices annoy you? And what can be done to address them? Speak up in the comments.


DARPA Pays MIT to Pay Someone Who Recruited Someone Who Recruited Someone Who Recruited Someone Who Found a Red Balloon

DARPA, the Defense Department’s research arm, recently sponsored a “Network Challenge” in which groups competed to find ten big red weather balloons that were positioned in public places around the U.S. The first team to discover where all the balloons were would win $40,000.

A team from MIT won, using a clever method of sharing the cash with volunteers. MIT let anyone join their team, and they paid money to the members who found balloons, as well as the people who recruited the balloon-finders, and the people who recruited the balloon-finder-finders. For example, if Alice recruited Bob, and Bob recruited Charlie, and Charlie recruited Diane, and Diane found a balloon, then Alice would get $250, Bob would get $500, Charlie would get $1000, and Diane would get $2000. Multi-level marketing meets treasure hunting! It’s the Amway of balloon-hunting!
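Because the payouts halve at each step up the recruitment chain, the arithmetic is easy to sketch (a toy Python illustration; the function name is mine):

```python
def balloon_payouts(chain, finder_reward=2000):
    """Compute payouts for a recruitment chain ordered from the earliest
    recruiter to the balloon finder.  The finder gets the full reward;
    each recruiter up the chain gets half of their recruit's reward."""
    payouts = {}
    reward = finder_reward
    for person in reversed(chain):  # finder first, then recruiters
        payouts[person] = reward
        reward //= 2
    return payouts

print(balloon_payouts(["Alice", "Bob", "Charlie", "Diane"]))
# {'Diane': 2000, 'Charlie': 1000, 'Bob': 500, 'Alice': 250}
```

Note that each balloon’s chain pays out at most $2000 + $1000 + $500 + … < $4000, so ten balloons could never cost MIT more than the $40,000 prize.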

On DARPA’s side, this was inspired by the famous Grand Challenge and Urban Challenge, in which teams built autonomous cars that had to drive themselves safely through a desert landscape and then a city.

The autonomous-car challenges have obvious value, both for the military and in ordinary civilian life. But it’s hard to say the same for the balloon-hunting challenge. Granted, the balloon-hunting prize was much smaller, but it’s still hard to avoid the impression that the balloon hunt was more of a publicity stunt than a spur to research. We already knew that the Internet lets people organize themselves into effective groups at a distance. We already knew that people like a scavenger hunt, especially if you offer significant cash prizes. And we already knew that you can pay Internet strangers to do jobs for you. But how are we going to apply what we learned in the balloon hunt?

The autonomous-car challenge has value because it asks the teams to build something that will eventually have practical use. Someday we will all have autonomous cars, and they will have major implications for our transportation infrastructure. The autonomous-car challenge helped to bring that day closer. But will the day ever come when all, or even many, of us will want to pay large teams of people to find things for us?

(There’s more to be said about the general approach of offering challenge prizes as an alternative to traditional research funding, but that’s a topic for another day.)


Tinkering with Disclosed Source Voting Systems

As Ed pointed out in October, Sequoia Voting Systems, Inc. (“Sequoia”) announced that it intended to publish the source code of its voting system software, called “Frontier”, currently under development. (Also see EKR’s post: “Contrarianism on Sequoia’s Disclosed Source Voting System”.)

Yesterday, Sequoia made good on this promise and you can now pull the source code they’ve made available from their Subversion repository here:
http://sequoiadev.svn.beanstalkapp.com/projects/

Sequoia refers to this move in its release as “the first public disclosure of source code from a voting systems manufacturer”. Carefully parsed, that’s probably correct: there have been unintentional disclosures of source code (e.g., Diebold in 2003), and I know of two other voting-industry companies that have disclosed source code (VoteHere, now out of business, and Everyone Counts), but these were either not “voting systems manufacturers” or the disclosures were not available publicly. Of course, almost all of the research systems (like VoteBox and Helios) have been truly open source. Groups like OSDV and OVC have released or will soon release voting system source code under open source licenses.

I wrote a paper ages ago (2006) on the use of open and disclosed source code for voting systems and I’m surprised at how well that analysis and set of recommendations has held up (the original paper is here, an updated version is in pages 11–41 of my PhD thesis).

The purpose of my post here is to highlight one point of that paper in a bit of detail: disclosed source software licenses need to have a few specific features to be useful to potential voting system evaluators. I’ll start by describing three examples of disclosed source software licenses and then talk about what I’d like to see, as a tinkerer, in these agreements.

The definition of an open source software product is relatively simple: for all practical purposes, anything released under an OSI-approved software license is open source, especially in the sense that one who downloads the source code will have wide latitude to copy, distribute, modify, perform, and otherwise use it. What we refer to as disclosed source software is publicly released under a more restrictive license.

Three Disclosed Source Licenses

We have at least three examples of these kinds of licenses from voting systems applications:

  • Sequoia: Sequoia’s license is a good place to start, due to its relative simplicity. It grants the user a limited copyright and patent license for “reference use”, which is defined as (emphasis added):

    “Reference use” means use of the software within your company as a reference, in read only form, for the sole purposes of reviewing and inspecting the code. In addition, this license allows inspection of the code to enhance, or develop ancillary software or hardware products to enhance interoperability of your products with the software. Distribution of this source is allowed provided that this license, and all copyright notices remain in-tact.

    (If you’re a developer and you suspect that you’ve seen this license before, you might have. It appears identical to Microsoft’s Reference Source License (Ms-RSL), which is used to distribute things like the .NET Framework under Microsoft’s shared source program. That makes a good deal of sense, since Frontier is written in .NET!)

  • Everyone Counts1: Everyone Counts’ (E1C) license is curious. (Unfortunately, I don’t see a copy of this license posted publicly, so I’ll just quote from it.) I say curious mostly because of the flowery language it includes, such as (emphasis added):

    The sources are provided for scrutiny in the same spirit as any member of the public may apply and obtain escorted access to a public election, and is entitled free and open access to election processes to satisfy his or herself of the veracity of the claims of electoral officers and that the process upholds democratic principles. In the same way, the elections observer is not permitted to capture and publish or otherwise make public the processes of the physical count at an election, the same applies for these sources.

    I’m not sure what that last part is about; it’s pretty well accepted—in the U.S.—that “making public the processes of the physical count” is a basic requirement of democratic elections.

    Anyway, on to the substance: It’s a pretty simple license, although more complex than the Sequoia license. The core of this license allows the user to “examine, compile and execute the resulting bytecodes” from the Java source code. It specifies that the user is allowed these rights for the purpose of “analysis forming electoral scrutiny”, which is a difficult phrase to parse. The license suffers from a lot of such wording problems, which make it pretty hard to understand.

  • VoteHere2: The VoteHere license agreement is considerably more complex and looks more like a commercial software license agreement. My favorite part, of course, is:

    TO AVOID ANY DOUBT, THIS SOFTWARE IS NOT BEING LICENSED ON AN OPEN SOURCE BASIS.

    The central component is that VoteHere restricts all your rights, other than copying and modifying the source code for evaluation purposes, and owns any derivative works you create from the source code. It has some other quirks; for example, the license, despite being a click-wrap license, has a hard term of 60 days after which all copies and such must be destroyed. (Presumably, you could click through again after 60 days, and restart the term.)

What Does an Evaluator Need in a License?

Each of these licenses has its strengths and weaknesses. The Sequoia license doesn’t seem to permit modification of the source code, but it is relatively simple and allows distribution of Sequoia’s unmodified code. The VoteHere and E1C licenses, however, recognize that modification may be necessary to evaluate the software (VoteHere even includes a license to the system’s documentation, an essential but often overlooked part of evaluating source code). The VoteHere license is extremely onerous: it is very strict and places heavy burdens on the evaluator. The E1C license is flowery and hard to understand, but seems simple at its core and seems to recognize what evaluators might need in terms of modifying the code during evaluation.

This raises a good question: What rights, exactly, do evaluators need to examine source code? Practically, it depends on what they want to do. If all they want to do is line-by-line human source code analysis, then a “read-only” license like Sequoia’s is probably fine. But what about compiling the source code with debugging flags set? What about modifying the software to see how another piece of it performs?

Listed (sort of) in terms of the rights granted by U.S. copyright law, here are some thoughts:

  • Examining: If it doesn’t require making a copy, simply looking at the source code with your eyeballs is not covered by copyright law. So licenses that allow you to “examine” the source code aren’t really granting much, unless they define “examine” to mean something that implicates exclusive rights (which are listed in 17 USC 106).

  • Copying: Of course, downloading and loading source code in an IDE or text editor will make a number of copies, so at the most basic level, evaluators will need to be able to do this. Backup copies are also a necessity these days and VoteHere’s license contemplates this by allowing “reproduc[tion] for temporary archive purposes”.

  • Modification: Evaluators will need some ability to modify the code: at minimum to compile it for execution, and as a next logical step to compile with “debugging flags” set (special flags and metadata in the compiled code that let debugging tools step through the program, set breakpoints, etc.). Some types of evaluation require minor modifications to integrate the code into an analysis method; a simple example is changing one piece of the code to see how other pieces respond, or inserting code that prints to the screen (a primitive but useful form of debugging!). Each of these actions creates a derivative work, so each should be explicitly allowed in these licenses.

  • Distribution: At first, you might not think that evaluators would need much in the way of rights to distribute the source code. However, distributing modified works, such as a patch that fixes a bug, could be very useful. Also, being able to share the code if the official means of getting the code goes dark is often useful; for example, having a portal of voting system source code would provide this “mirroring” capability and could also allow creating a “one-stop shop” for tinkerers and researchers who would point research tools at these code repositories for analysis.

  • Performance: Both reading the source code aloud and showing it publicly are implicated by copyright law. Why would an evaluator want to do this? Well, imagine that you’ve completed an analysis of a disclosed-source product and you want to write up your findings. It’s often useful to include snippets of code. Small snippets would likely be covered by fair use, but it’s always nice not to have to worry about that and to have explicit permission to at least “display” and possibly “read aloud” source code in these contexts (think accessible or podcast versions of a report!).

  • Executing: There is a line of legal cases holding that “executing” a program implicates copyright law, because a copy of the object code is loaded into memory at run time. Hopefully, the permission to make copies, the first and most basic need of evaluators highlighted above, also covers this interpretation of “executing” the code.

Outside of these types of exclusive rights, there’s also something to be said for simplicity. The BSD license is a great example: it is widely used, widely understood, and very generous. The Sequoia license (being the Ms-RSL) is very simple and easy to understand. The E1C license is not particularly complex, but its substance is hard to understand (again, apologies that I cannot post the text of that license). The VoteHere license is readable but very complex, and extremely onerous in the burden it places on evaluators.

As I finally finish writing this, I’m told Sequoia might be interested in modifying their license. That would be a wonderful idea and I hope these thoughts are useful for modifying it. I do wonder how they’ll be able to modify the license and still distribute parts of the .NET Framework under a new license. Perhaps they’ll specify that the .NET parts are under the Ms-RSL and any Sequoia-sourced source code is under Sequoia’s new license. We’ll see!

1 Everyone Counts sells internet voting solutions, which are scary as hell to a lot of us.

2 VoteHere was a company, now seemingly out of business, that made a number of products, including Sentinel, a cryptographic add-on module for the Diebold/Premier AccuVote-TS voting system, and an absentee ballot tracking system.


Soghoian: 8 Million Reasons for Real Surveillance Oversight

If you’re interested at all in surveillance policy, go read Chris Soghoian’s long and impassioned post today. Chris drops several bombshells into the debate, including an audio recording of a closed-door talk by Sprint Nextel’s Electronic Surveillance Manager, bragging about how easy the company has made it for law enforcement to get customers’ location data — so easy that the company has serviced over eight million law enforcement requests for customer location data.

Here’s the juiciest quote:

“[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.”

– Paul Taylor, Electronic Surveillance Manager, Sprint Nextel.

Chris has more quotes, facts, and figures, all implying that electronic surveillance is much more widespread than many of us had thought.

Probably, many of these surveillance requests were justified, in the sense that a fair-minded citizen would think their expected public benefit justified the intrusiveness. How many were justified, we don’t know. We can’t know — and that’s a big part of the problem.

It’s deeply troubling that this has happened without significant public debate or even much disclosure. We need to have a discussion — and quickly — about what the rules for electronic surveillance should be.