April 24, 2014

Brazilian Communications Agency Moves Towards Surveillance Superpowers

January is the month when the Brazilian version of the popular TV show Big Brother returns to the air. For three months, a bunch of people are locked inside a house and their lives are broadcast 24/7. A TV show premised on nonstop surveillance might sound like fun to some people, but it is disturbing when governments engage in similar practices. A few days ago, the Brazilian national communications agency (known as Anatel) announced a plan to implement 24/7 surveillance over the more than 203 million cell phones in the country.

As reported by Folha de Sao Paulo, the largest newspaper in the country, Anatel has invested about $500,000 in building three central switches that connect directly to the private carriers’ networks. The switches are not for eavesdropping, but they will provide the agency with direct access to information such as the numbers dialed, date, time, amount paid, and duration of all phone calls. They will also provide access to personal information such as the name, address, and taxpayer number of every mobile customer.

The agency claims that the system will help “modernize” its capability to regulate phone companies, leading to better quality of service. Currently, the data is privately kept by each phone company. The agency can ask for that information, but has to rely on what is provided. It claims that its technicians “are not prepared to deal with the systems used by the phone carriers and obtain the necessary original information”. So it has decided to collect the information directly, creating its own database in order to “validate” the information.

Lawyers and civil rights advocates are worried about this intention to turn Anatel into a “Big Brother” entity. Floriano Marques, an administrative law attorney, claims that the new measure is a “pathology”. He says “it reflects a trend of weakening privacy rights that can be found in various efforts of the public administration in Brazil”. And he is right. Recent events indicate that some public authorities in Brazil have been holding privacy in low regard. In the presidential campaign of 2010, Brazilian tax officials were caught disclosing confidential tax information of members of the political party opposing the government.

Also, a Brazilian Senator named Eduardo Azeredo introduced a bill mandating that every citizen establish his or her identity through a digital certificate before connecting to the Internet. After causing considerable uproar, the bill was amended to exclude the mandatory identification provision, but it still includes disconcerting surveillance provisions, such as the obligation imposed on websites and service providers to keep records of users’ online activities for five years.

Lawyers and civil rights activists fear that Anatel’s surveillance superpowers will open the path for all sorts of misuse. They claim the project violates the Brazilian Constitution, which protects privacy as a fundamental right, as well as due process. The agency would gain access to sensitive information without prior permission of users, or any scrutiny by the courts.

Arguably, the implementation of these new provisions by Anatel puts Brazil one step closer to initiatives such as China’s practice of scanning SMS messages for “illegal or unhealthy” content, India’s demands for monitoring communications sent via BlackBerry smartphones, or other countries’ investments in technical infrastructure to surveil citizens. For a country that once pledged allegiance to the Penguin, in reference to its support for online freedom, free software, and free culture policies, recent developments have shown an unexpected Orwellian touch.

Predictions for 2011

As promised, the official Freedom to Tinker predictions for 2011. These predictions are the result of discussions that included myself, Joe Hall, Steve Schultze, Wendy Seltzer, Dan Wallach, and Harlan Yu, but note that we don’t individually agree with every prediction.

  1. DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.
  2. Copyright and patent issues will continue to be stalemated in Congress, with no major legislation on either subject.
  3. Momentum will grow for HTTPS by default, with several major websites adding HTTPS support. Work will begin on adding HTTPS-by-default support to Apache.
  4. Despite substantial attention by Congress to online privacy, the FTC won’t be granted authority to mandate Do Not Track compliance.
  5. Some advertising networks and third-party Web services will begin to voluntarily respect the Do Not Track header, which will be supported by all the major browsers. However, sites will have varying interpretations of what the DNT header requires, leading to accusations that some purportedly DNT-respecting sites are not fully DNT-compliant.
  6. Congress will pass an electronic privacy bill along the lines of the principles set out by the Digital Due Process Coalition.
  7. The seemingly N^2 patent lawsuits among all the major smartphone players will be resolved through a grand cross-licensing bargain, cut in a dark, smoky room, whose terms will only be revealed through some congratulatory emails that leak to the press. None of these lawsuits will get anywhere near a courtroom.
  8. Android smartphones will continue gaining market share, mostly at the expense of BlackBerry and Windows Mobile phones. However, Android’s gains will mostly be at the low end of the market; the iPhone will continue to outsell any single Android smartphone model by a wide margin.
  9. 2011 will see the outbreak of the first massive botnet/malware that attacks smartphones, most likely iPhone or Android models running older software than the latest and greatest. If Android is the target, it will lead to aggressive finger-pointing, particularly given how many users are presently running Android software that’s a year or more behind Google’s latest—a trend that will continue in 2011.
  10. Mainstream media outlets will continue building custom “apps” to present their content on mobile devices. They’ll fall short of expectations and fail to reverse the decline of any magazines or newspapers.
  11. At year’s end, the district court will still not have issued a final judgment on the Google Book Search settlement.
  12. The market for Internet set-top boxes like Google TV and Apple TV will continue to be chaotic throughout 2011, with no single device taking a decisive market share lead. The big winners will be online services like Netflix, Hulu, and Pandora that work with a wide variety of hardware devices.
  13. Online sellers with device-specific consumer stores (Amazon for Kindle books, Apple for iPhone/iPad apps, Microsoft for Xbox Live, etc.) will come under antitrust scrutiny, and perhaps even be dragged into court. Nothing will be resolved before the end of 2011.
  14. With electronic voting machines beginning to wear out but budgets tight, there will be much heated discussion of electronic voting, including antitrust concern over the e-voting technology vendors. But there will be no fundamental changes in policy. The incumbent vendors will continue to charge thousands of dollars for products that cost them a tiny fraction of that to manufacture.
  15. Pressure will continue to mount on election authorities to make it easier for overseas and military voters to cast votes remotely, despite all the obvious-to-everybody-else security concerns. While counties with large military populations will continue to conduct “pilot” studies with Internet voting, with grandiose claims of how they’ve been “proven” secure because nobody bothered to attack them, very few military voters will cast actual ballots over the Internet in 2011.
  16. In contrast, where domestic absentee voters are permitted to use remote voting systems (e.g., systems that transmit blank ballots that the voter returns by mail) voters will do so in large numbers, increasing the pressure to make remote voting easier for domestic voters and further exacerbating security concerns.
  17. At least one candidate for the Republican presidential nomination will express concern about the security of electronic voting machines.
  18. Multiple Wikileaks alternatives will pop up, and pundits will start to realize that mass leaks are enabled by technology trends, not just by one freaky Australian dude.
  19. The RIAA and/or MPAA will be sued over their role in the government’s actions to reassign DNS names owned by allegedly unlawful web sites. Even if the lawsuit manages to get all the way to trial, there won’t be a significant ruling against them.
  20. Copyright claims will be asserted against players even further removed from underlying infringement than Internet/online Service Providers: domain name system participants, ad and payment networks, and upstream hosts. Some of these claims will win at the district court level, mostly on default judgments, but appeals will still be pending at year’s end.
  21. A distributed naming system for Web/broadcast content will gain substantial mindshare and measurable US usage after the trifecta of attacks on Wikileaks DNS, COICA, and further attacks on privacy-preserving or anonymous registration in the ICANN-sponsored DNS. It will go even further in another country.
  22. ICANN still will not have introduced new generic TLDs.
  23. The FCC’s recently-announced network neutrality rules will continue to attract criticism from both ends of the political spectrum, and will be the subject of critical hearings in the Republican House, but neither Congress nor the courts will overturn the rules.
  24. The tech policy world will continue debating the Comcast/Level 3 dispute, but Level 3 will continue paying Comcast to deliver Netflix content, and the FCC won’t take any meaningful actions to help Level 3 or punish Comcast.
  25. Comcast and other cable companies will treat the Comcast/Level 3 dispute as a template for future negotiations, demanding payments to terminate streaming video content. As a result, the network neutrality debate will increasingly focus on streaming high-definition video, and legal academia will become a lot more interested in the economics of Internet interconnection.

2010 Predictions Scorecard

We’re running a little behind this year, but as we do every year, we’ll review the predictions we made for 2010. Below you’ll find our predictions from 2010 in italics, and the results in ordinary type. Please notify us in the comments if we missed anything.

(1) DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

We win again! There are many examples, but one that we predicted specifically is that HDCP was cracked. Guess what our first prediction for 2011 will be? Verdict: Right.

(2) Federated DRM systems, such as DECE and KeyChest, will not catch on.

Work on DECE (now renamed UltraViolet) continues to roll forward, with what appears to be broad industry support. It remains to be seen if those devices will actually work well, but the format seems to have at least “caught on” among industry players. We haven’t been following this market too closely, but given that KeyChest seems to mostly be mentioned as an also-ran in UltraViolet stories, its chances don’t look as good. Verdict: Mostly wrong.

(3) Content providers will crack down on online sites that host unlicensed re-streaming of live sports programming. DMCA takedown notices will be followed by a lawsuit claiming actual knowledge of infringing materials and direct financial benefits.

Like their non-live brethren, live streaming sites like Justin.tv have received numerous DMCA takedown notices for copyrighted content. At the time of this prediction, we were unaware of the lawsuit against Ustream by a boxing promotional company, which began in August 2009. Nonetheless, the trend has continued. In the UK, there was an active game of cat-and-mouse between sports teams and live illegal restreaming sources for football (ahem: soccer) and cricket, which make much of their revenue from selling tickets to live matches. In some cases, a number of pubs were temporarily closed when their licenses were suspended in the face of complaints from content providers. In the US, Zuffa, the parent company of the mixed martial arts production company Ultimate Fighting Championship, sued when a patron at a Boston bar connected his laptop to one of the bar’s TVs to stream a UFC fight from an illicit site (Zuffa is claiming $640k in damages). In July, Zuffa subpoenaed the IP addresses of people uploading its content. And last week UFC sued Justin.tv directly for contributory and vicarious infringement, inducement, and other claims (RECAP docket). Verdict: Mostly right.

(4) Major newspaper content will continue to be available online for free (with ads) despite cheerleading for paywalls by Rupert Murdoch and others.

Early last year, the New York Times announced its intention to introduce a paywall in January 2011, and that plan still seems to be on track, but it didn’t actually happen in 2010. The story is the same at the Philly Inquirer, which is considering a paywall but hasn’t put one in place. The Wall Street Journal was behind a paywall already. Other major papers, including the Los Angeles Times, the Washington Post, and USA Today, seem to be paywall-free. The one major paper we could find that did go behind a paywall is the Times of London, which did so in July, with predictably poor results. Verdict: Mostly right.

(5) The Supreme Court will strike down pure business model patents in its Bilski opinion. The Court will establish a new test for patentability, rather than accepting the Federal Circuit’s test. The Court won’t go so far as to ban software patents, but the implications of the ruling for software patents will be unclear and will generate much debate.

The Supreme Court struck down the specific patent at issue in the case, but it declined to invalidate business method patents more generally. It also failed to articulate a clear new test. The decision did generate plenty of debate, but that went without saying. Verdict: Wrong.

(6) Patent reform legislation won’t pass in 2010. Calls for Congress to resolve the post-Bilski uncertainty will contribute to the delay.

Another prediction that works every year. Verdict: Right.

(7) After the upcoming rulings in Quon (Supreme Court), Comprehensive Drug Testing (Ninth Circuit or Supreme Court) and Warshak (Sixth Circuit), 2010 will be remembered as the year the courts finally extended the full protection of the Fourth Amendment to the Internet.

The Supreme Court decided Quon on relatively narrow grounds and deferred on the Fourth Amendment questions on electronic privacy, and the Ninth Circuit in Comprehensive Drug Testing dismissed the lower court's privacy-protective guidelines for electronic searches. However, the big privacy decision of the year was in Warshak, where the Sixth Circuit ruled strongly in favor of the privacy of remotely stored e-mail. Paul Ohm said of the decision: “It may someday be seen as a watershed moment in the extension of our Constitutional rights to the Internet.” Verdict: Mostly right.

(8) Fresh evidence will come to light of the extent of law enforcement access to mobile phone location-data, intensifying the debate about the status of mobile location data under the Fourth Amendment and electronic surveillance statutes. Civil libertarians will call for stronger oversight, but nothing will come of it by year’s end.

Even though we didn’t learn anything significant and new about the extent of government access to mobile location data, the debate around “cell-site” tracking privacy certainly intensified, in Congress, in the courts and in the public eye. The issue gained significant public attention through a trio of pro-privacy victories in the federal courts and Congress held a hearing on ECPA reform that focused specifically on location-based services. Despite the efforts of the Digital Due Process Coalition, no bills were introduced in Congress to reform and clarify electronic surveillance statutes. Verdict: Mostly right.

(9) The FTC will continue to threaten to do much more to punish online privacy violations, but it won’t do much to make good on the threats.

As a student of the FTC’s Chief Technologist, I’m not touching this one with a ten-foot pole.

(10) The new Apple tablet will be gorgeous but expensive. It will be a huge hit only if it offers some kind of advance in the basic human interface, such as a really effective full-sized on-screen keyboard.

Gorgeous? Check. Expensive? Check. Huge hit? Check. Advance in the basic human interface? The Reality Distortion Field forces me to say “yes.” Verdict: Mostly right.

(11) The disadvantages of iTunes-style walled garden app stores will become increasingly evident. Apple will consider relaxing its restrictions on iPhone apps, but in the end will offer only rhetoric, not real change.

Apple’s iPhone faced increasingly strong competition from Google’s rival Android platform, and it’s possible this could be attributed to Google’s more liberal policies for allowing apps to run on Android devices. Still, iPhones and iPads continued to sell briskly, and we’re not aware of any major problems arising from Apple’s closed business model. Verdict: Wrong.

(12) Internet Explorer’s usage share will fall below 50 percent for the first time in a decade, spurred by continued growth of Firefox, Chrome, and Safari.

There’s no generally-accepted yardstick for browser usage share, because there are so many different ways to measure it. But Wikipedia has helpfully aggregated browser usage share statistics. All five metrics listed there show Internet Explorer’s usage share falling by between 5 and 10 percentage points over the past year, with current values between 41 and 61 percent. The mean of these statistics is 49.5 percent, and the median is 46.94 percent. Verdict: Right.
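The gap between the mean and the median here is exactly why the choice of summary statistic matters when the five metrics disagree. A quick sketch in Python, using hypothetical per-metric figures (the individual values below are illustrative stand-ins, not the real Wikipedia numbers, chosen only so the mean and median match those quoted above):

```python
from statistics import mean, median

# Hypothetical stand-ins (percent) for the five usage-share metrics
# aggregated on Wikipedia; the actual per-metric values were not given,
# so these are chosen only to reproduce the quoted mean and median.
ie_share = [42.0, 44.0, 46.94, 54.0, 60.56]

print(mean(ie_share))    # ~49.5: just below the 50-percent threshold
print(median(ie_share))  # 46.94: the middle metric, also below it
```

With a couple of high outliers, the mean can sit well above the median, so whether IE "fell below 50 percent" depends on which summary you trust.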

(13) Amazon and other online retailers will be forced to collect state sales tax in all 50 states. This will have little impact on the growth of their business, as they will continue to undercut local bricks-and-mortar stores on prices, but it will remove their incentive to build warehouses in odd places just to avoid having to collect sales tax.

State legislators continue to introduce proposals to tax out-of-state retailers, but Amazon has fought hard against these proposals, and so far the company has largely kept them at bay. Verdict: Wrong.

(14) Mobile carriers will continue locking consumers in to long-term service contracts despite the best efforts of Google and the handset manufacturers to sell unlocked phones.

Google’s experiment selling the Nexus One directly to consumers via the web ended in failure after about four months. T-Mobile, traditionally the nation’s least restrictive national wireless carrier, recently made it harder for consumers to find its no-contract “Even More Plus” plans. It’s still possible to get an unlocked phone if you really want one, but you have to pay a hefty premium, and few consumers are bothering. Verdict: Right.

(15) Palm will die, or be absorbed by Research In Motion or Microsoft.

This prediction was almost right. Palm’s Web OS didn’t catch on, and in April the company was acquired by a large IT firm. However, that technology firm was HP, not RIM or Microsoft. Verdict: Half right.

(16) In July, when all the iPhone 3G early adopters are coming off their two-year lock-in with AT&T, there will be a frenzy of Android and other smartphone devices competing for AT&T’s customers. Apple, no doubt offering yet another version of the iPhone at the time, will be forced to cut its prices, but will hang onto its centralized app store. Android will be the big winner in this battle, in terms of gained market share, but there will be all kinds of fragmentation, with different carriers offering slightly different and incompatible variants on Android.

Almost everything we predicted here happened. The one questionable prediction is the price cut, but we’re going to say that this counts. Verdict: Right.

(17) Hackers will quickly sort out how to install their own Android builds on locked-down Android phones from all the major vendors, leading to threatened or actual lawsuits but no successful legal action taken.

The XDA Developers Forum continues to be the locus for this type of Android hacking, and this year it did not disappoint. The Droid X was rooted and the Droid 2 was rooted, along with many other Android phones. The much-anticipated T-Mobile G2 came with a new lock-down mechanism based in hardware. HTC initially failed to meet its legal obligation to publish its modifications to the Linux source code that implemented this mechanism, but relented after a Freedom to Tinker post generated some heat. The crack took about a month, and now G2 owners are able to install their own Android builds. Verdict: Right.

(18) Twitter will peak and begin its decline as a human-to-human communication medium.

We’re not sure how to measure this prediction, but Twitter recently raised another $200 million in venture capital and its users exchanged 250 billion tweets in 2010. That doesn’t look like decline to us. Verdict: Wrong.

(19) A politician or a candidate will commit a high-profile “macaca”-like moment via Twitter.

We can’t think of any good examples of high-profile cases that severely affected a politician’s prospects in the 2010 elections, like the “macaca” comment did to George Allen’s 2006 Senate campaign. However, there were a number of lower-profile gaffes, including Sarah Palin’s call for peaceful Muslims to “refudiate” the “Ground Zero Mosque” (the New Oxford American Dictionary named refudiate its word of the year), then-Senator Chris Dodd’s staff mis-tweeting inappropriate comments, and a technical glitch in computer software at the U.S. embassy in Beijing tweeting that the air quality one day was “crazy bad”. Verdict: Mostly wrong.

(20) Facebook customers will become increasingly disenchanted with the company, but won’t leave in large numbers because they’ll have too much information locked up in the site.

In May 2010, Facebook once again changed its privacy policy to make more Facebook user information available to more people. On two occasions, Facebook has faced criticism for leaking user data to advertisers. But the site doesn’t seem to have declined in popularity. Verdict: Right.

(21) The fashionable anti-Internet argument of 2010 will be that the Net has passed its prime, supplanting the (equally bogus) 2009 fad argument that the Internet is bad for literacy.

Wired declared the web dead back in August. Is that the same thing as saying the Net has passed its prime? Bogus arguments all sound the same to us. Verdict: Mostly right.

(22) One year after the release of the Obama Administration’s Open Government Directive, the effort will be seen as a measured success. Agencies will show eagerness to embrace data transparency but will find the mechanics of releasing datasets to be long and difficult. Privacy (how to deal with personal information available in public data) will be one major hurdle.

Many people are calling this open government’s “beta period.” Federal agencies took the landmark step in January by releasing their first “high-value” datasets on Data.gov, but some advocates say these datasets are not “high value” enough. Agencies also published their plans for open government—some were better than others—and implementation of these promises has indeed been incremental. Privacy has been an issue in many cases, but it’s often difficult to know the reasons why an agency decides not to release a dataset. Verdict: Mostly right.

(23) The Open Government agenda will be the bright spot in the Administration’s tech policy, which will otherwise be seen as a business-as-usual continuation of past policies.

As we noted above, the Obama administration has had a pretty good record on open government issues. Probably the most controversial tech policy change has been the FCC’s adoption of new network neutrality rules. These weren’t exactly a continuation of Bush administration policies, but they also didn’t go as far as many activist groups wanted. And we can’t think of any other major tech policy changes. Verdict: Mostly right.

Our score: 7 right, 8 mostly right, 1 half right, 2 mostly wrong, 4 wrong.

Seals on NJ voting machines, 2004-2008

I have just released a new paper entitled “Security seals on voting machines: a case study,” and here I’ll explain how I came to write it.

Like many computer scientists, I became interested in the technology of vote-counting after the technological failure of hanging chads and butterfly ballots in 2000. In 2004 I visited my local polling place to watch the procedures for closing the polls, and I noticed that ballot cartridges were sealed by plastic strap seals like this one:

plastic strap seal

The pollworkers are supposed to write down the serial numbers on the official precinct report, but (as I later found when Ed Felten obtained dozens of these reports through an open-records request) about 50% of the time they forget to do this.

In 2008 when (as the expert witness in a lawsuit) I examined the hardware and software of New Jersey’s voting machines, I found that there were no security seals present that would impede opening the circuit-board cover to replace the vote-counting software. The vote-cartridge seal looks like it would prevent the cover from being opened, but it doesn’t.

There was a place to put a seal on the circuit-board cover, through the hole labeled “DO NOT REMOVE”, but there was no seal there.

Somebody had removed a seal, probably a voting-machine repairman who had to open the cover to replace the batteries, and nobody bothered to install a new one.

The problem with paperless electronic voting machines is that if a crooked political operative has access to install fraudulent software, that software can switch votes from one candidate to another. So, in my report to the Court during the lawsuit, I wrote,


10.6. For a system of tamper-evident seals to provide effective protection, the seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. With respect to the Sequoia AVC Advantage, this means that all five of the following would have to be true. But in fact, not a single one of these is true in practice, as I will explain.

  1. The seals would have to be routinely in place at all times when an attacker might wish to access the Z80 Program ROM; but they are not.
  2. The cartridge should not be removable without leaving evidence of tampering with the seal; but plastic seals can be quickly defeated, as I will explain.
  3. The panel covering the main circuit board should not be removable without removing the [vote-cartridge] seal; but in fact it is removable without disturbing the seal.
  4. If a seal with a different serial number is substituted, written records would have to reliably catch this substitution; but I have found major gaps in these records in New Jersey.
  5. Identical replacement seals (with duplicate serial numbers) should not exist; but the evidence shows that no serious attempt is made to avoid duplication.

Those five criteria are just common sense about what would be required in any effective system for protecting something using tamper-indicating seals. What I found was that (1) the seals aren’t always there; (2) even if they were, you can remove the cartridge without visible evidence of tampering with the seal and (3) you can remove the circuit-board cover without even disturbing the plastic-strap seal; (4) even if that hadn’t been true, the seal-inspection records are quite lackadaisical and incomplete; and (5) even if that weren’t true, since the counties tend to re-use the same serial numbers, the attacker could just obtain fresh seals with the same number!
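Criteria (4) and (5) amount to a record-keeping audit, and failure modes like the ones described above are easy to state precisely. Here is a rough sketch of such an audit in Python; this code is mine, not from the paper, and every machine name and serial number in it is hypothetical:

```python
def audit_seals(installed, recorded):
    """Compare seal serials observed on machines against precinct records.

    installed: machine id -> serial observed on the seal.
    recorded:  machine id -> serial written on the precinct report,
               or None where pollworkers forgot to record it.
    Returns a list of (machine, problem) pairs.
    """
    problems = []
    seen = {}  # serial -> first machine it was observed on
    for machine, serial in installed.items():
        expected = recorded.get(machine)
        if expected is None:
            # Failure mode (4): no written record, so substitution
            # of a fresh seal could never be detected.
            problems.append((machine, "no recorded serial to check against"))
        elif expected != serial:
            problems.append((machine, "serial mismatch: possible substitution"))
        if serial in seen:
            # Failure mode (5): duplicate serials defeat the whole scheme.
            problems.append((machine, "duplicate serial, also on " + seen[serial]))
        else:
            seen[serial] = machine
    return problems

# Machine B's record is missing; machines A and C share a serial.
issues = audit_seals(
    installed={"A": "0451", "B": "0452", "C": "0451"},
    recorded={"A": "0451", "B": None, "C": "0451"},
)
for machine, issue in issues:
    print(machine, "->", issue)
```

Of course, no software audit helps with failure modes (1) through (3), where the seal is absent or physically bypassable; the point is only that even the paperwork half of the protocol fails in practice.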

Since the time I wrote that, I’ve learned from the seal experts that there’s a lot more to a seal use protocol than these five observations. I’ll write about that in the near future.

But first, I’ll write about the State of New Jersey’s slapdash response to my first examination of their seals. Stay tuned.

If Wikileaks Scraped P2P Networks for "Leaks," Did it Break Federal Criminal Law?

On Bloomberg.com today, Michael Riley reports that some of the documents hosted at Wikileaks may not be “leaks” at all, at least not in the traditional sense of the word. Instead, according to a computer security firm called Tiversa, “computers in Sweden” have been searching the files shared on p2p networks like Limewire for sensitive and confidential information, and the firm supposedly has proof that some of the documents found in this way have ended up on the Wikileaks site. These charges are denied as “completely false in every regard” by Wikileaks lawyer Mark Stephens.

I have no idea whether these accusations are true, but I am interested to learn from the story that if they are true they might provide “an alternate path for prosecuting WikiLeaks,” most importantly because the reporter attributes this claim to me. Although I wasn’t misquoted in the article, I think what I said to the reporter is a few shades away from what he reported, so I wanted to clarify what I think about this.

In the interview and in the article, I focus only on the Computer Fraud and Abuse Act (“CFAA”), the primary federal law prohibiting computer hacking. The CFAA defines a number of federal crimes, most of which turn on whether an action on a computer or network was done “without authorization” or in a way that “exceeds authorized access.”

The question presented by the reporter to me (though not in these words) was: is it a violation of the CFAA to systematically crawl a p2p network like Limewire searching for and downloading files that might be mistakenly shared, like spreadsheets or word processing documents full of secrets?

I don’t think so. With everything I know about the text of this statute, the legislative history surrounding its enactment, and the cases that have interpreted it, this kind of searching and downloading won’t “exceed the authorized access” of the p2p network. This simply isn’t a crime under the CFAA.

But although I don’t think this is a viable theory, I can’t unequivocally dismiss it, for a few reasons, all of which I tried to convey in the interview. First, some courts have interpreted “exceeds authorized access” broadly, especially in civil lawsuits arising under the CFAA. For example, back in 2001, one court declared it a CFAA violation for a competitor to use a spider to collect prices from a travel website, where the defendant had built the spider by taking advantage of “proprietary information” from a former employee of the plaintiff. (For much more on this, see this article by Orin Kerr.)

Second, it seems self-evident that these confidential files are being shared by accident. The users “leaking” these files are either misunderstanding or misconfiguring their p2p clients in ways that would horrify them, if only they knew the truth. While this doesn’t translate directly into “exceeds authorized access,” it might weigh heavily in court, especially if the government can show that a reasonable searcher/downloader would immediately and unambiguously understand that the files were shared by accident.

Third, let’s be realistic: there may be judges who are so troubled by what they see as the harm caused by Wikileaks that they might be willing to read the open-textured and mostly undefined terms of the CFAA broadly if it might help throw a hurdle in Wikileaks’ way. I’m not saying that judges will bend the law to the facts, but I think that with a law as vague as the CFAA, multiple interpretations are defensible.

But I restate my conclusion: I think a prosecution under the CFAA against someone for searching a p2p network should fail. The text and caselaw of the CFAA don’t support such a prosecution. Maybe it’s “not a slam dunk either way,” as I am quoted saying in the story, but for the lawyers defending against such a theory, it’s at worst an easy layup.


Web Browser Security User Interfaces: Hard to Get Right and Increasingly Inconsistent

A great deal of online commerce, speech, and socializing supposedly happens over encrypted protocols. When using these protocols, users supposedly know what remote web site they are communicating with, and they know that nobody else can listen in. In the past, this blog has detailed how the technical protocols and legal framework are lacking. Today I’d like to talk about how secure communications are represented in the browser user interface (UI), and what users should be expected to believe based on those indicators.

The most ubiquitous indicator of a “secure” connection on the web is the “padlock icon.” For years, banks, commerce sites, and geek grandchildren have been telling people to “look for the lock.” However, the padlock has problems. First, user studies have shown that despite all of the imploring, many people just don’t pay attention. Second, when they do pay attention, the padlock often gives them the impression that the site they are connecting to is the real-world person or company that the site claims to be (in reality, it usually just means that the connection is encrypted to “somebody”). Even more generally, many people think that the padlock means that they are “safe” to do whatever they wish on the site without risk. Finally, there are some tricky hacker moves that can make it appear that a padlock is present when it actually is not.

A few years ago, a group of engineers invented “Extended Validation” (EV) certificates. As opposed to “Domain Validation” (DV) certs that simply verify that you are talking to “somebody” who owns the domain, EV certificates actually do verify real-world identities. They also typically cause some prominent part of the browser to turn green and show the real-world entity’s name and location (e.g., “Bank of America Corporation (US)”). Separately, the World Wide Web Consortium (W3C) recently issued a final draft of a document entitled “Web Security Context: User Interface Guidelines.” The document describes web site “identity signals,” saying that the browser must “make information about the identity of the Web site that a user interacts with available.” These developments highlight a shift in browser security UI from simply showing a binary padlock/no-padlock icon to showing richer information about identity (when it exists).

In the course of trying to understand all of these changes, I made a disturbing discovery: different browser vendors are changing their security UIs in different ways. Here are snapshots from some of the major browsers:

As you can see, all of the browsers other than Firefox still have a padlock icon (albeit in different places). Chrome now makes “https” and the padlock icon green regardless of whether it is DV or EV (see the debate here), whereas the other browsers reserve the green color for EV only. The confusion is made worse by the fact that Chrome appears to contain a bug in which the organization name/location (the only indication of EV validation) sometimes does not appear. Firefox chose to use the color blue for DV even though one of their user experience guys noted, “The color blue unfortunately carries no meaning or really any form of positive/negative connotation (this was intentional and the rational[e] is rather complex)”. The name/location from EV certificates appears in different places, and the method of coloring elements also varies (Safari in particular colors only the text, and does so in dark shades that can sometimes be hard to discern from black). Some browsers also make (different) portions of the url a shade of gray in an attempt to emphasize the domain you are visiting.

Almost all of the browsers have made changes to these elements in recent versions. Mozilla has been particularly aggressive in changing Firefox’s user interface, with the most dramatic change being the removal of the padlock icon entirely as of Firefox 4. Here is the progression in changes to the UI when visiting DV-certified sites:

By stepping back to Firefox 2.0, we can see a much more prominent padlock icon in both the URL bar and in the bottom-right “status bar” along with an indication of what domain is being validated. Firefox 3.0 toned down the color scheme of the lock icon, making it less attention grabbing and removing it from the URL bar. It also removed the yellow background that the URL bar would show for encrypted sites, and introduced a blue glow around the site icon (“favicon”) if the site provided a DV cert. This area was named the “site identification button,” and is either grey, blue, or green depending on the level of security offered. Users can click on the button to get more information about the certificate, presuming they know to do so. At some point between Firefox 3.0 and 3.6, the domain name was moved from the status bar (and away from the padlock icon) to the “site identification button”.

In the soon-to-be-released Firefox 4, the padlock icon is removed altogether. Mozilla actually removed the “status bar” at the bottom of the screen completely, and the padlock icon with it. This has caused consternation among some users, and generated about 35k downloads of an addon that restores some of the functionality of the status bar (but not the padlock).

Are these changes a good thing? On the one hand, movement toward a more accurately descriptive system is generally laudable. On the other, I’m not sure whether there has been any study about how users interpret the color-only system — especially in the context of varying browser implementations. Anecdotally, I was unaware of the Firefox changes, and I had a moment of panic when I had just finished a banking transaction using a Firefox 4 beta and realized that there was no lock icon. I am not the only one. Perhaps I’m an outlier, and perhaps it’s worth the confusion in order to move to a better system. However, at the very least I would expect Mozilla to do more to proactively inform users about the changes.

It seems disturbing that the browsers are diverging in their visual language of security. I have heard people argue that competition in security UI could be a good thing, but I am not convinced that any benefits would outweigh the cost of confusing users. I’m also not sure that users are aware enough of the differences to consider them when selecting a browser, which limits the positive effects of any competition. What’s more, the problem is only set to get worse as more and more browsing takes place on mobile devices that are inherently constrained in what they can cram on the screen. Just take a look at iOS vs. Android:

To begin with, Mobile Safari behaves differently from desktop Safari. The green color is even harder to see here, and one wonders whether the eye will notice any of these changes when they appear in the browser title bar (this is particularly evident when browsing on an iPad). Android’s browser displays a lock icon that is identical for DV and EV sites. Windows Phone 7 behaves similarly, but only when the URL bar is present — and the URL bar is automatically hidden when you rotate your phone into landscape mode. Blackberry shows a padlock icon inconspicuously in the top status bar of the phone (the same area as your signal strength and battery status). Blackberry uniquely shows an unlocked padlock icon when on non-encrypted sites, something I don’t remember in desktop browsers since Netscape Navigator (although maybe it’s a good idea to re-introduce some positive indication of “not encrypted”).

Some of my more cynical (or realistic) colleagues have said that given the research showing that most users don’t pay attention to this stuff anyway, trying to fix it is pointless. I am sympathetic to that view, and I think that making more sites default to HTTPS, encouraging adoption of standards like HSTS, and working on standards to make it easier to encrypt web communications are probably lower-hanging fruit. There nevertheless seems to be an opportunity here for some standardization amongst the browser vendors, with a foundation in actual usability testing.


Some Technical Clarifications About Do Not Track

When I last wrote here about Do Not Track in August, there were just a few rumblings about the possibility of a Do Not Track mechanism for online privacy. Fast forward four months, and Do Not Track has shot to the top of the privacy agenda among regulators in Washington. The FTC staff privacy report released in December endorsed the idea, and Congress was quick to hold a hearing on the issue earlier this month. Now, odds are quite good that some kind of Do Not Track legislation will be introduced early in this new congressional session.

While there isn’t yet a concrete proposal for Do Not Track on the table, much has already been written both in support of and against the idea in general, and it’s terrific to see the issue debated so widely. As I’ve been following along, I’ve noticed some technical confusion on a few points related to Do Not Track, and I’d like to address three of them here.

1. Do Not Track will most likely be based on an HTTP header.

I’ve read some people still suggesting that Do Not Track will be some form of a government-operated list or registry—perhaps of consumer names, device identifiers, tracking domains, or something else. This type of solution was suggested in an earlier conception of Do Not Track, and given its rhetorical likeness to the Do Not Call Registry, it’s a natural connection to make. But as I discussed in my earlier post—the details of which I won’t rehash here—a list mechanism is a relatively clumsy solution to this problem for a number of reasons.

A more elegant solution—and the one that many technologists seem to have coalesced around—is the use of a special HTTP header that simply tells the server whether the user is opting out of tracking for that Web request, i.e. the header can be set to either “on” or “off” for each request. If the header is “on,” the server would be responsible for honoring the user’s choice to not be tracked. Users would be able to control this choice through the preferences panel of the browser or the mobile platform.

2. Do Not Track won’t require us to “re-engineer the Internet.”

It’s also been suggested that implementing Do Not Track in this way will require a substantial amount of additional work, possibly even rising to the level of “re-engineering the Internet.” This is decidedly false. The HTTP standard is an extensible one, and it “allows an open-ended set of… headers” to be defined for it. Indeed, custom HTTP headers are used in many Web applications today.

How much work will it take to implement Do Not Track using the header? Generally speaking, not too much. On the client-side, adding the ability to send the Do Not Track header is a relatively simple undertaking. For instance, it only took about 30 minutes of programming to add this functionality to a popular extension for the Firefox Web browser. Other plug-ins already exist. Implementing this functionality directly into the browser might take a little bit longer, but much of the work will be in designing a clear and easily understandable user interface for the option.
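As a rough sketch of the client side, attaching the header amounts to one extra line per outgoing request. (The header name “X-Do-Not-Track” and the browser name below are placeholders for illustration; no header name had been standardized at the time.)

```python
# Illustrative sketch only: "X-Do-Not-Track" is a placeholder header name,
# since the actual name had not yet been standardized.
def build_request_headers(host, do_not_track=True):
    """Assemble the headers for an outgoing web request."""
    headers = {
        "Host": host,
        "User-Agent": "ExampleBrowser/1.0",  # hypothetical browser
    }
    if do_not_track:
        # "1" signals the user's choice to opt out of tracking
        headers["X-Do-Not-Track"] = "1"
    return headers

build_request_headers("example.com")  # includes "X-Do-Not-Track": "1"
```

The browser would set the `do_not_track` flag from a checkbox in its preferences panel; everything else about the request is unchanged.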

On the server-side, adding code to detect the header is also a reasonably easy task—it takes just a few extra lines of code in most popular Web frameworks. It could take more substantial work to program how the server behaves when the header is “on,” but this work is often already necessary even in the absence of Do Not Track. With industry self-regulation, compliant ad servers supposedly already handle the case where a user opts out of their behavioral advertising programs, the difference now being that the opt-out signal comes from a header rather than a cookie. (Of course, the FTC could require stricter standards for what opting-out means.)
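To illustrate the server side (again with a placeholder header name), detecting the header in a WSGI-style Python application really is just a conditional. WSGI exposes an incoming header like `X-Do-Not-Track` to the application as the environ key `HTTP_X_DO_NOT_TRACK`:

```python
# Sketch of server-side handling; "X-Do-Not-Track" is a placeholder name.
def handle_request(environ):
    """Return (status, headers), honoring the opt-out header if present."""
    opted_out = environ.get("HTTP_X_DO_NOT_TRACK") == "1"
    response_headers = [("Content-Type", "text/html")]
    if not opted_out:
        # Only users who have not opted out get a tracking cookie
        response_headers.append(("Set-Cookie", "tracking_id=abc123"))
    return "200 OK", response_headers
```

The harder (and largely preexisting) work is behind that conditional: making sure the ad-serving logic actually refrains from tracking when the flag is set.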

Note also that contrary to some suggestions, the header mechanism doesn’t require consumers to identify who they are or otherwise authenticate to servers in order to gain tracking protection. Since the header is a simple on/off flag sent with every Web request, the server doesn’t need to maintain any persistent state about users or their devices’ opt-out preferences.

3. Microsoft’s new Tracking Protection feature isn’t the same as Do Not Track.

Last month, Microsoft announced that its next release of Internet Explorer will include a privacy feature called Tracking Protection. Mozilla is also reportedly considering a similar browser-based solution (although a later report makes it unclear whether they actually will). Browser vendors should be given credit for doing what they can from within their products to protect user privacy, but their efforts are distinct from the Do Not Track header proposal. Let me explain the major difference.

Browser-based features like Tracking Protection basically amount to blocking Web connections from known tracking domains that are compiled on a list. They don’t protect users from tracking by new domains (at least until they’re noticed and added to the tracking list) nor from “allowed” domains that are tracking users surreptitiously.
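The difference can be made concrete. A list-based feature reduces, in essence, to a hostname lookup against a compiled list, so a tracking domain that isn’t on the list yet sails through. (The domains below are made up for illustration.)

```python
from urllib.parse import urlparse

# Made-up blocklist for illustration
BLOCKLIST = {"tracker.example", "ads.example"}

def should_block(url):
    """Blocklist-style check: block only connections to known tracking hosts."""
    return urlparse(url).hostname in BLOCKLIST

should_block("http://tracker.example/pixel.gif")      # known tracker: blocked
should_block("http://new-tracker.example/pixel.gif")  # unknown: not blocked
```

Every new tracking domain requires a list update before users are protected, which is exactly the gap described above.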

In contrast, the Do Not Track header compels servers to cooperate, to proactively refrain from any attempts to track the user. The header could be sent to all third-party domains, regardless of whether the domain is already known or whether it actually engages in tracking. With the header, users wouldn’t need to guess whether a domain should be blocked or not, and they wouldn’t have to risk either allowing tracking accidentally or blocking a useful feature.

Tracking Protection and other similar browser-based defenses like Adblock Plus and NoScript are reasonable, but incomplete, interim solutions. They should be viewed as complementary with Do Not Track. For entities under FTC jurisdiction, Do Not Track could put an effective end to the tracking arms race between those entities and browser-based defenses—a race that browsers (and thus consumers) are losing now and will be losing in the foreseeable future. For those entities outside FTC jurisdiction, blocking unwanted third parties is still a useful though leaky defense that maintains the status quo.

Information security experts like to preach “defense in depth” and it’s certainly vital in this case. Neither solution fully protects the user, so users really need both solutions to be available in order to gain more comprehensive protection. As such, the upcoming features in IE and Firefox should not be seen as a technical substitute for Do Not Track.

——

To reiterate: if the technology that implements Do Not Track ends up being an HTTP header, which I think it should be, it would be both technically feasible and relatively simple. It’s also distinct from recent browser announcements about privacy in that Do Not Track forces server cooperation, while browser-based defenses work alone to fend off tracking.

What other technical issues related to Do Not Track remain murky to readers? Feel free to leave comments here, or if you prefer on Twitter using the #dntrack tag and @harlanyu.


CITP Visitors Application Deadline Extended to Feb 1st

The deadline for applications to CITP’s Visitors Program has been extended to February 1st. If you or someone you know is interested but has questions, feel free to contact me at

The Center has secured limited resources from a range of sources to support visiting faculty, scholars or policy experts for up to one-year appointments during the 2011-2012 academic year. We are interested in applications from academic faculty and researchers as well as from individuals who have practical experience in the policy arena. The rank and status of the successful applicant(s) will be determined on a case-by-case basis. We are particularly interested in hearing from faculty members at other universities and from individuals who have first-hand experience in public service in the technology policy area.

For more details and instructions about how to apply, see the full description here.


RIP Bill Zeller

All of us here at CITP were saddened by the death of Bill Zeller, our respected and much-loved colleague. Bill was a Ph.D. candidate in Computer Science here at Princeton, who died last night due to injuries sustained in a suicide attempt.

There has been a huge outpouring of sympathy for Bill, both at Princeton and across the Internet, which is entirely appropriate. But I’d like to focus here on the positive side of Bill’s life.

Bill has made at least two appearances here on Freedom to Tinker, first as the instigator of the Miraculin experiment (Miracle Fruit: Tinkering with our Taste Buds), then later for his research on web security (Popular Websites Vulnerable to Cross-Site Request Forgery Attacks).

Bill always had a new project brewing. His projects ranged from the quirky (the cult favorite Cats in Christmas Trees site) to an early blogging tool (Zempt, which was incorporated into Movable Type) to many useful software development tools (such as jLambda). Tens of millions of people have read or used something that Bill created.

Bill’s sense of humor was much appreciated by his friends. He would sometimes go to considerable lengths for the sake of a joke. Once, for the sake of an office joke, he created a technology package including an online game, an RSS-based miniblogging tool, and a screen saver. Then, later, he shut it all down, as a birthday present for the friend who was the target of his (good-natured) joke.

We have many, many fond memories of Bill, more than we could possibly fit here.

Those of you who knew Bill are invited to add your own fond memories in the comments.


Monitoring all the electrical and hydraulic appliances in your house

Dan Wallach recently wrote about his smart electric meter, which keeps track of the second-by-second current draw of his whole house. But what he might really like to know is exactly which appliance is on at what time. How could you measure that?

You might think that one would have to instrument each different circuit at the breaker box, or every individual electric plug at the outlet. This would be expensive, not so much for all the little sensors as for the labor of an electrician to install everything.

Recent “gee whiz” research by Professor Shwetak Patel’s group at the University of Washington provides a really elegant solution. Every appliance you own–your refrigerator, your flat-screen TV, your toaster–has a different “electrical noise signature” that it draws from the wires in your house. When you turn it on, this signal is (inadvertently) sent through the electric wires to the circuit-breaker box. It’s not necessary (as one commenter suggested) to buy “smart appliances” that send purpose-designed on-off signals; your “dumb” appliances already send their own noise signatures.

Patel’s group built a device that you plug in to an electrical outlet, which figures out when your appliances are turning on and off. The device is equipped with a database of common signatures (it can tell one brand of TV from another!) and with machine-learning algorithms that figure out the unique characteristics of your particular devices (if you have two “identical” Toshiba TVs, it can tell them apart!). Patel’s device could be an extremely useful “green technology” to help consumers painlessly reduce their electricity consumption.
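As a toy illustration of the idea (not Patel’s actual algorithm, and with entirely made-up numbers), one can imagine matching an observed noise signature against a database of known appliance signatures by nearest neighbor:

```python
import math

# Made-up spectral "signatures" for illustration; the real system
# uses far richer features and learned per-household models.
SIGNATURES = {
    "refrigerator": (0.9, 0.1, 0.3),
    "flat-screen TV": (0.2, 0.8, 0.5),
    "toaster": (0.7, 0.7, 0.1),
}

def classify(observed):
    """Return the known appliance whose signature is closest to the observation."""
    return min(SIGNATURES, key=lambda name: math.dist(SIGNATURES[name], observed))

classify((0.85, 0.15, 0.25))  # closest to the refrigerator's signature
```

The machine-learning part of the real device refines these reference signatures over time, which is how it can distinguish even two “identical” TVs whose noise profiles drift apart slightly.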

Patel can do the same trick on your water pipes. Each toilet flush or shower faucet naturally sends a different acoustic pressure signal, and a single sensor can monitor all your devices.

Of course, in addition to the “green” advantages of this technology, there are privacy implications. Even without your consent, the electric company and the water company are permitted to continuously measure your use of electricity and water; taken to the extreme, this monitoring alone could tell them exactly when you use each and every device in your house.