August 18, 2017

On Encryption, Archiving, and Accountability

“As Elites Switch to Texting, Watchdogs Fear Loss of Accountability,” says a headline in today’s New York Times. The story describes a rising concern among rule enforcers and compliance officers:

Secure messaging apps like WhatsApp, Signal and Confide are making inroads among lawmakers, corporate executives and other prominent communicators. Spooked by surveillance and wary of being exposed by hackers, they are switching from phone calls and emails to apps that allow them to send encrypted and self-destructing texts. These apps have obvious benefits, but their use is causing problems in heavily regulated industries, where careful record-keeping is standard procedure.

Among those “industries” is the government, where laws often require that officials’ work-related communications be retained, archived, and available to the public under the Freedom of Information Act. The move to secure messaging apps frustrates these goals.

The switch to more secure messaging is happening, and for good reason: old-school messages are increasingly vulnerable to compromise. The DNC and the Clinton campaign are among the many organizations that have paid a price for underestimating these risks.

The tradeoffs here are real. But this is not just a case of choosing between insecure-and-compliant and secure-and-noncompliant. The new secure apps have three properties that differ from old-school email: they encrypt messages end-to-end from the sender to the receiver; they sometimes delete messages quickly after they are transmitted and read; and they are set up and controlled by the end user rather than the employer.

If the concern is lack of archiving, then the last property–user control of the account, rather than employer control–is the main problem. And of course that has been a persistent problem even with email. Public officials using their personal email accounts for public business is typically not allowed (and when it happens by accident, messages are supposed to be forwarded to official accounts so they will be archived), but unreported use of personal accounts has been all too common.

Much of the reporting on this issue (but not the Times article) makes the mistake of conflating the personal-account problem with the fact that these apps use encryption. There is nothing about end-to-end encryption of data in transit that is inconsistent with archiving. The app could record messages and then upload them to an archive–with this upload also protected by end-to-end encryption as a best practice.

The second property of these apps–deleting messages shortly after use–has more complicated security implications. Again, the message becoming unavailable to the user shortly after use need not conflict with archiving. The message could be uploaded securely to an archive before deleting it from the endpoint device.
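To make this concrete, here is a minimal sketch, using the PyNaCl library, of how an app could combine end-to-end encryption with archiving: the sender encrypts one copy of each message to the recipient and a second copy to a key held only by the archive, then deletes its local plaintext. The key names, function, and flow are my own illustration of the idea, not a description of how any existing app works.

```python
# Sketch: end-to-end encryption that is still compatible with archiving.
# One copy of the message is encrypted to the recipient, a second copy
# to the archive's key; the sending device can then discard its plaintext.
from nacl.public import PrivateKey, SealedBox

recipient = PrivateKey.generate()   # recipient's keypair (illustrative)
archive = PrivateKey.generate()     # keypair held only by the guarded archive

def send_and_archive(plaintext: bytes):
    # Copy 1: end-to-end encrypted to the recipient, sent over the wire.
    for_recipient = SealedBox(recipient.public_key).encrypt(plaintext)
    # Copy 2: end-to-end encrypted to the archive, then uploaded.
    for_archive = SealedBox(archive.public_key).encrypt(plaintext)
    # Neither the network nor the messaging provider can read either copy.
    return for_recipient, for_archive

wire_copy, archived_copy = send_and_archive(b"work-related message")
# Later, the archive (and only the archive) can decrypt its copy,
# for example to answer a Freedom of Information Act request.
assert SealedBox(archive).decrypt(archived_copy) == b"work-related message"
```

The point of the sketch is simply that the upload to the archive can itself be end-to-end encrypted, so adding an archive does not require weakening the encryption used in transit.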

You might ask why the user should lose access to a message when that message is still stored in an archive. But this makes some sense as a security precaution. Most compromises of communications happen through the user’s access, for example because an attacker can get the user’s login credentials by phishing. Taking away the user’s access, while retaining access in a more carefully guarded archive, is a reasonable security precaution for sensitive messages.

But of course the archive still poses a security risk. Although an archive ought to be more carefully protected than a user account would be, the archive is also a big, high-value target for attackers. The decision to create an archive should not be taken lightly, but it may be justified if the need for accountability is strong enough and the communications are not overly sensitive.

The upshot of all of this is that the most modern approaches to secure communication are not entirely incompatible with the kind of accountability needed for government and some other users. Accountable versions of these services could be created. They would be less secure than the current versions, but more secure than old-school communications. The barriers to creating them are institutional, not technical.

European authorities fine Google for search tactics

This week the European Commission (EC) announced that it is fining Google $2.7 billion for anti-competitive tactics in the company’s iconic search product. In this post I’ll unpack what’s going on here.

I have some background on this topic. In 2011-12, when I was Chief Technologist at the FTC, the agency did a big investigation on this same topic. The FTC eventually decided not to bring a case against Google for this behavior. The EC has now reached a different conclusion.

The EC makes two main claims. First, they claim that Google dominates the search engine market in Europe–it’s pretty hard to argue with that.  Second, they claim Google designed its dominant search product in ways that unfairly advantage the company’s own Google Shopping product and unfairly disadvantage competing comparison shopping products.

Competition law is complicated, and I won’t presume to offer any legal analysis. But the basic principles motivating competition policy are not too complicated. Fair competition is encouraged. If your business grows because you improve your product, or manage your operations well, or negotiate shrewdly, or simply happen to be in the right place at the right time, that’s all good. If you amass dump trucks full of money doing this, then good for you, and thank you for your tax dollars. That’s how capitalism is supposed to work.

But if your effort is devoted to preventing fair competition, then you are probably harming consumers, and that’s a competition policy problem.  To see the difference, suppose you’re in the business of delivering packages to people’s homes.  Fair competition means buying better trucks, optimizing routes and schedules, hiring better employees, and so on.  But if you send out employees to block your competitors’ trucks, that is an anticompetitive tactic.

Now back to Google. The EC says that when users do searches relevant to shopping, Google gives its own Google Shopping product preferred placement in the search results–and higher placement leads to more clicks and more sales–while demoting competing shopping products in the search results. These two claims, self-promotion and competitor-demotion, may sound similar at first, but they raise different issues for us in understanding the case, so let’s look at them separately.

On the self-promotion claim, we know the relevant facts. On shopping-relevant searches, Google puts a box at or near the top of the search results, showing Google Shopping results with images of items for sale. That is a valuable benefit that Google Search is giving to the Google Shopping product. Is this anticompetitive? Google’s strongest argument to the contrary is that the Shopping box is essentially an ad, and Google already places ads at the top of the page. If Google auctioned that space off to the highest bidder for advertising, nobody would object. So why is it a problem if Google gives that advertising space to Google Shopping? The company could make a symbolic payment to itself to buy the space, if that made a difference to anybody.

The competitor-demotion claim is very different–the theory is less complicated, but the analysis depends more on facts not available to the public. If Google is gratuitously demoting its shopping competitors in search results, that is problematic. But Google says it is not doing that–it says that those competitors’ placements arise naturally from a search ranking algorithm based on design decisions that the company made for legitimate, pro-consumer reasons.

It’s hard for the public to tell who is right. Google’s ranking algorithm is complicated, and it changes constantly, as the Web changes and as Google works to counter sites’ attempts to game the algorithm. Is there evidence that Google tweaked the algorithm with the goal of demoting shopping competitors? Did the company make algorithm changes for the wrong reasons, or did suspicious changes happen outside the normal process? These questions are answerable in principle, but only by looking at the company’s internal information, which the EC might have but we, the public, do not.

At this point, I need to put some of my cards on the table and admit that I know more about this topic than I can say, having worked on the FTC’s investigation, which asked some of the same questions. But that investigation was confidential, for good reasons, and I will not violate that confidentiality. All I’ll say is that the FTC had the legal power to compel answers to factual questions about Google’s practices (and an obligation to keep the answers confidential) and, having conducted a thorough investigation, the FTC decided not to bring a case against Google.

So why did the European authorities get a different result than the U.S. authorities? The answer might lie in differences between European and American competition law. Or it might lie in the fact that European authorities find it easier to enforce against a foreign company. Regardless of the reason, Google is presumably looking for ways to resolve the complaints that led to this investigation being started.

Killing car privacy by federal mandate

The US National Highway Traffic Safety Administration (NHTSA) is proposing a requirement that every car broadcast a cleartext message specifying its exact position, speed, and heading ten times per second. In comments filed in April, during the 90-day comment period, we (specifically, Leo Reyzin, Anna Lysyanskaya, Vitaly Shmatikov, Adam Smith, together with the CDT via Joseph Lorenzo Hall and Joseph Jerome) argued that this requirement will result in a significant loss of privacy. Others have aptly argued that the proposed system also has serious security challenges and cannot prevent potentially deadly malicious broadcasts, and that it will be outdated before it is deployed. In this post I focus on privacy, though I think the security problems and resulting safety risks are also important to consider.

The basic summary of the proposal, known as Dedicated Short Range Communication (DSRC), is as follows. From the moment a car turns on until it shuts off, it will broadcast a so-called “basic safety message” (BSM) every tenth of a second, to a minimum range of 300m. The message will include position (with accuracy of 1.5m), speed, heading, acceleration, yaw rate, path history for the past 300m, predicted path curvature, steering wheel angle, car length and width rounded to 20cm precision, and a few other indicators. Each message will also include a temporary vehicle id (randomly generated and changed every five minutes), to enable receivers to tell whether they are hearing from the same car or from different cars.
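For readers who want to see the data laid out, here is a rough sketch in Python of the fields such a message would carry, based on the summary above. The field names and types are my own shorthand; the real BSM is a compact binary structure defined by SAE standards, not a Python object.

```python
# Illustrative layout of a DSRC basic safety message (BSM), broadcast
# in cleartext ten times per second; field names are my own shorthand.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BasicSafetyMessage:
    temp_id: bytes                            # temporary vehicle id, regenerated every 5 minutes
    position: Tuple[float, float]             # latitude/longitude, ~1.5 m accuracy
    speed: float
    heading: float
    acceleration: float
    yaw_rate: float
    steering_wheel_angle: float
    path_history: List[Tuple[float, float]]   # positions over the past 300 m
    path_prediction: float                    # predicted path curvature
    length_cm: int                            # vehicle length, rounded to 20 cm
    width_cm: int                             # vehicle width, rounded to 20 cm
```

Every one of these fields is broadcast in the clear; the only cryptography involved is the signature discussed next.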

Under the proposal, each message will be digitally signed. Each car will be provisioned with 20 certificates (and corresponding secret keys) per week, and will cycle through these certificates during the week, using each one for five minutes at a time. Certificates will be revocable; revocation is meant to guard against incorrect (malicious or erroneous) information in the broadcast messages, though there is no concrete proposal for how to detect such incorrect information.
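Some back-of-the-envelope arithmetic shows why the certificate schedule itself can undermine the pseudonymity of the five-minute ids. A week contains 2016 five-minute slots but each car has only 20 certificates, so any car driven regularly will reuse each certificate many times during the week, and seeing the same certificate again is evidence of seeing the same car (this is one of the “quasi-identifiers” discussed below). The index calculation here is my own illustration of a cycling schedule, not the proposal’s exact mechanism.

```python
# Sketch of a 20-certificate weekly cycle, one certificate per 5-minute slot.
def certificate_index(seconds_into_week: int, num_certs: int = 20) -> int:
    slot = seconds_into_week // 300      # which 5-minute slot of the week we are in
    return slot % num_certs              # cycle through the 20 certificates

slots_per_week = 7 * 24 * 60 // 5        # 2016 five-minute slots in a week
print(slots_per_week, "slots shared among only 20 certificates")
```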

It is not hard to see that if such a system were to be deployed, a powerful antenna could easily pick up messages from well beyond the 300m design radius (we’ve seen design ranges extended by two or three orders of magnitude through the use of good antennas with Bluetooth and Wi-Fi). Combining data from several antennas, one could easily link messages together, figuring out where each car was parked, what path it took, and where it ended up. This information will often enable one to link the car to an individual–for example, by looking at the address where the car is parked at night.

The fundamental privacy problem with the proposal is that messages can be linked together even though they carry no long-term ids. The linking is simplest, of course, when the temporary id does not change, which makes it easy to track a car for five minutes at a time. When the temporary id changes, two consecutive messages can still be linked easily using the high-precision position information they contain. One also doesn’t have to observe the exact moment that the temporary id changes: it is possible to link messages by a variety of so-called “quasi-identifiers,” such as car dimensions; position in relation to other cars; the relationship between acceleration, steering wheel angle, and yaw, which will differ for different models; variability in how different models calculate path history; repeated certificates; etc. You can read more about various linking methods in our comments and in comments by the EFF.
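To see how easy the basic linking step is, here is a toy sketch of dead-reckoning linkage: with broadcasts every tenth of a second and roughly 1.5m position accuracy, a car’s next message lands almost exactly where its previous message predicts, so an id change does not break the chain. This is a simplification of the methods in our comments (it ignores measurement noise and the other quasi-identifiers listed above), and the coordinate handling is deliberately naive.

```python
import math

def predict_next_position(x, y, speed, heading_deg, dt=0.1):
    # Advance the last known position by speed * dt along the heading
    # (positions are in meters on a local flat-plane approximation).
    h = math.radians(heading_deg)
    return x + speed * dt * math.sin(h), y + speed * dt * math.cos(h)

def link(prev_msg, new_msgs):
    # Match the previous broadcast to whichever new broadcast landed
    # closest to where that car should be a tenth of a second later.
    ex, ey = predict_next_position(prev_msg["x"], prev_msg["y"],
                                   prev_msg["speed"], prev_msg["heading"])
    return min(new_msgs, key=lambda m: math.hypot(m["x"] - ex, m["y"] - ey))
```

At city speeds a car moves only a meter or two between broadcasts, comparable to the position accuracy itself, so this nearest-prediction match is typically unambiguous; the fancier quasi-identifiers above handle the harder cases.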

Thus, by using an antenna and a laptop, one could put a neighborhood under ubiquitous real-time surveillance, a boon to stalkers and burglars. Well-resourced companies, crime bosses, and government agencies could easily surveil the movements of a large population in real time for pennies per car per year.

To our surprise, the NHTSA proposal did not consider the cost of lost privacy in its cost-benefit analysis; instead, it considered only “perceived” privacy loss as a cost. The adjective “perceived” in this context is a convenient way to dismiss privacy concerns as figments of imagination, despite the fact that an NHTSA-commissioned analysis found that BSM-based tracking would be quite easy.

What about the safety benefits of the proposed technology? Are they worth the privacy loss? As the EFF and Brad Templeton (among others) have argued, the proposed mandate will take money away from other safety technologies that are likely to have broader applications and raise fewer privacy concerns. The proposed technology is already becoming outdated, and will be even more out of date by the time it is deployed widely enough to make any difference.

But, you may object, isn’t vehicle privacy already dead? What about license plate scanners, cell-phone-based tracking, or aerial tracking from drones? Indeed, all of these technologies are a threat to vehicle privacy. None of them, however, permits tracking quite as cheaply, undetectably, and pervasively. For example, license-plate scanners require visual contact and are more conspicuous than a hidden radio antenna would be. A report commissioned by NHTSA concluded that other approaches did not seem practical for aggregate tracking.

Moreover, it is important to avoid the fallacy of relative privation: even if there are other ways of tracking cars today, we should not add one more, which will be mandated by the government for decades to come. To fix existing privacy problems, we can work on technical approaches for making cell phones harder to track or on regulatory restrictions on the use of license plate scanners. Instead of creating new privacy problems that will persist for decades, we should be working on reducing the ones that exist.