

Workshop: Computing in the Cloud

I’m excited to announce that Princeton’s Center for InfoTech Policy is putting on a workshop on the policy and social implications of “Computing in the Cloud” – the trend where companies, rather than users, store and manage an increasing range of personal data.

Examples include Hotmail and Gmail replacing desktop email, YouTube taking over as a personal video platform, and Flickr competing with desktop photo storage solutions. Facebook, MySpace, and other social networks have pioneered new kinds of tools that couldn’t exist on the desktop, and more new models are sure to emerge.

I’m confident that this trend will reshape tech policy and will change how people relate to technology, but I don’t know yet exactly what those changes will be. By drawing together experts from computer science, industry, government and law, I hope the Center can help those of us at Princeton, and workshop participants from around the country, get a better sense of where things might be headed.

The workshop will be held on the Princeton campus on January 14 and 15, 2008. It will be free and open to the public. We will have a series of panel discussions, interspersed with opportunities for informal exchanges of ideas. We’re still putting together the list of panels and panelists, so we haven’t yet published a schedule. If you’re interested in attending or want to get email updates about the workshop, please email David Robinson (dgr at princeton dot edu).

Here are some of the possible themes for panels we are exploring:

  • Possession and ownership of data: In cloud computing, a provider’s data center holds information that would more traditionally have been stored on the end user’s computer. How does this impact user privacy? To what extent do users “own” this data, and what obligations do the service providers have? What obligations should they have? Does moving the data to the provider’s data center improve security or endanger it?
  • Collaboration and globalization: Cloud computing systems offer new sharing and collaboration features beyond what was possible before. They make shared creation among far-flung users easier, allow or require data to be stored in many different jurisdictions, and give users access to offerings that may be illegal in the users’ home countries. How will local laws, when applied to data centers whose user base is global, affect users in practice? Do these services drive forward economic growth — and if so, what effect should that fact have on the policy debate?
  • New roles for new intermediaries: Cloud services often involve new intermediaries such as Facebook, MySpace, eBay, and Second Life, standing between people who might have interacted more directly before these services emerged. To what extent are these services “communities”, as their providers claim? How much control do users feel they have over these communities? How much control do and should they actually have? How does the centralized nature of these intermediaries affect the efficiency and diversity of online experiences? Can the market protect consumers and competition, or is government oversight needed?
  • What’s next: What new services might develop, and how will today’s services evolve? How well will cloud computing be likely to serve users, companies, investors, government, and the public over the longer run? Which social and policy problems will get worse due to cloud computing, and which will get better?

Slysoft Commercializes Next-Gen DVD Circumvention

We’ve been following, off and on, the steady meltdown of AACS, the encryption scheme used in HD-DVD and Blu-ray, the next-generation DVD systems. By this point, Hollywood has released four generations of AACS-encoded discs, each encrypted with different secret keys; and the popular circumvention tools can still decrypt them all. The industry is stuck on a treadmill: they change keys every ninety days, and attackers promptly reverse-engineer the new keys and carry on decrypting discs.
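
To see why the treadmill keeps spinning, here is a deliberately oversimplified model (the real AACS uses a subset-difference broadcast-encryption scheme; XOR stands in for real encryption, and all names are invented). Each disc generation wraps a fresh media key under every still-licensed device key, so rotating the media key is useless as long as attackers can extract any licensed device key from some player:

```python
import os

def make_disc(media_key: bytes, licensed_keys: dict, revoked: set) -> dict:
    """Key block for one disc generation: the media key 'encrypted'
    (XOR, for illustration) under every non-revoked device key."""
    return {dev: bytes(a ^ b for a, b in zip(media_key, key))
            for dev, key in licensed_keys.items() if dev not in revoked}

def rip(disc: dict, dev: str, dev_key: bytes):
    """An attacker holding an extracted device key recovers the media key."""
    blob = disc.get(dev)
    if blob is None:
        return None                        # the key was revoked; the rip fails
    return bytes(a ^ b for a, b in zip(blob, dev_key))

licensed = {f"player{i}": os.urandom(16) for i in range(5)}
stolen = ("player3", licensed["player3"])  # key extracted from one player
revoked = set()                            # the studios don't know which one

for generation in range(4):                # four generations so far
    media_key = os.urandom(16)             # rotated every ninety days
    disc = make_disc(media_key, licensed, revoked)
    ok = rip(disc, *stolen) == media_key
    print(f"generation {generation}: rip {'works' if ok else 'fails'}")
# The rip works in all four generations. Revocation only helps if the
# studio learns which device key leaked, and even then attackers can
# extract a fresh key from another licensed player.
```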

One thing that has changed is the nature of the attackers. In the early days, the most effective reverse engineers were individuals, communicating by email and pseudonymous forum posts. Their efforts resulted in rough but workable circumvention tools. In recent months, though, circumvention has gone commercial, with Slysoft, an Antigua-based maker of DVD-reader software, taking the lead and offering more polished tools for reading and ripping AACS discs.

You might wonder how a company that makes software for playing DVDs got into the circumvention business. The answer has to do with AACS’s pickiness about which equipment it will work with. My lab, for example, has an HD-DVD drive and some discs, which we have used for research purposes. But as far as I know, none of the computer monitors we own are AACS-approved, so we have no way to watch our lawfully purchased HD-DVDs on our lawfully purchased equipment. Many customers face similar problems.

If you’re selling HD-DVD player software, you can tell those customers that your product is incompatible with their equipment. Or you can solve their problem and make their legitimately purchased discs play on their legitimately purchased equipment. Of course, this will make you persona non grata in Hollywood, so you had better hire a few reverse engineers and get to work on some unauthorized decryption software – which seems to be what Slysoft did.

Now Slysoft faces the same reverse engineering challenges that Hollywood did. If Slysoft’s products contain the secrets to AACS decryption, then independent analysts can extract those secrets and clone Slysoft’s AACS decryption capability. Will those who live by reverse engineering die by reverse engineering?


Further adventures in personal credit

In our last installment, I described how one of the mortgage vendors I was considering for the loan on my new home failed to trigger the credit alerting mechanism (Debix) I had signed up for. Since then, I’ve learned several interesting facts. First, the way Debix operates is to insert a line into your credit reports which says, in effect, “you, the reader of this line, are required to call this 1-800 telephone number before granting credit based on what you see in this report.” That 800-number finds its way to Debix, where a robot answers the phone and asks the human caller for their name, organization, and the purpose of the request. Then the Debix robot calls its customer and asks permission to authorize the request, playing back the recordings made earlier.
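
The flow, in outline, looks something like the sketch below (all names are hypothetical, and the real Debix system is a phone robot rather than an API):

```python
from dataclasses import dataclass

@dataclass
class CreditRequest:
    caller_name: str      # recorded when the lender dials the 800 number
    organization: str
    purpose: str
    approved: bool = False

def answer_800_number(name: str, org: str, purpose: str) -> CreditRequest:
    """The robot answers the lender's call and records who is asking and why."""
    return CreditRequest(name, org, purpose)

def call_consumer(request: CreditRequest, consumer_says_yes: bool) -> CreditRequest:
    """The robot calls the consumer, plays back the recording, asks approval."""
    request.approved = consumer_says_yes
    return request

req = answer_800_number("loan officer", "My Mortgage Bank", "mortgage application")
req = call_consumer(req, consumer_says_yes=True)
print("release credit report" if req.approved else "refuse the request")
# Note: nothing forces the lender to dial the 800 number in the first
# place; that gap is the weakness this post keeps running into.
```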

The only thing that makes this “mandatory” is a recent law (sorry, I don’t have the citation handy) which specifies how lenders and the like are required to act when they see one of these alerts in a credit report. Aside from that legal requirement, the mechanism operates entirely at the discretion of a human loan officer. This leads me to wonder whether the mechanism works when there isn’t a human loan officer involved. I may just need to head over to some big box store and purchase myself something with an in-store instant-approval credit card, just to see what happens. (With my new house will inevitably come a number of non-trivial expenses, and oh what great savings I can get with those insta-credit cards!)

So does the mechanism work? Yesterday morning, as I was getting into the car to go to work, my cell phone rang with an 800-number as the caller ID. “Hello?” It was the Debix robot, asking for my approval. Debix played a recording of an apparently puzzled loan officer who identified herself as being from the bank that, indeed, I’m using for my loan. Well, that’s good. Could the loan officer have been lying? Unlikely. The identity thief isn’t the one who sees the 800-number; it’s the loan officer at the bank the thief is trying to defraud who makes the call. That means our prospective thief would need to guess which bank to apply to so that the recording would fool me into giving my okay. Given the number of choices, the odds of the thief nailing it on the first try are pretty low. (Unless our prospective thief is clever enough to have identified a bank that’s too lazy to follow the proper procedure and call the 800-number; more on this below.)

A side-effect of my last post was that it got noticed by some people inside Debix, and I ended up spending some quality time with one of their people on the telephone. They were quite interested in my experiences. They also told me that, assuming everything is working right, there will be some additional authentication hoops that the lender is (legally) mandated to jump through between now and when they actually write out the big check. Our closing date is next Friday, so I should have one more post when it’s all over describing how all of that worked in the end.

Further reading: The New York Times recently had an article (“In ID Theft, Some Victims See an Opportunity,” November 16, 2007) discussing Debix and several other companies competing in the same market. Here’s an interesting quote:

Among its peers, LifeLock has attracted the most attention — much of it negative. In radio and television ads, Todd Davis, chief executive of LifeLock, gives out his Social Security number to demonstrate his faith in the service. As a result, he has been hit with repeated identity theft attacks, including one successful effort this summer in which a check-cashing firm gave out a $500 loan to a Texas fraudster without ever checking Mr. Davis’s credit report.

Sure enough, if you go to LifeLock’s home page, you see Mr. Davis’s social security number, right up front. And, unsurprisingly, he fell victim because, indeed, fraudsters identified a loan organization that didn’t follow the (legally) mandated protocol.

How do we solve the problem? Legally mandated protocols need to become technically mandatory protocols. The sort of credit alerts placed by Debix, LifeLock, and others need to be more than just a line in the consumer’s credit file. Instead, the big-3 credit bureaus need to be (legally) required not to divulge anything beyond the credit-protection vendor’s 800-number without the explicit (technical) permission of the vendor (on behalf of the user). Doing this properly would require the credit bureaus to standardize and implement a suitable Internet-based API with all the right sorts of crypto authentication and so forth – nothing technically difficult about that. Legally, I’d imagine they’d put up more of a fight, since they may not like these startups getting in the way of their business.
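
As a rough illustration of what such an API might look like (all names here are hypothetical, and a real design would use public-key signatures, nonces, and audit logging rather than a shared secret), the bureau would release a report only upon presentation of a cryptographically verifiable approval issued by the consumer’s credit-protection vendor:

```python
import hmac, hashlib, json, time

VENDOR_KEY = b"shared-secret-between-vendor-and-bureau"  # illustration only

def vendor_issue_approval(consumer_id: str, requester: str) -> dict:
    """Vendor (after the consumer says yes) signs an approval token."""
    claims = {"consumer": consumer_id, "requester": requester,
              "expires": time.time() + 3600}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def bureau_release_report(token: dict, requester: str) -> bool:
    """Bureau verifies the vendor's signature before divulging anything."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and token["claims"]["requester"] == requester
            and token["claims"]["expires"] > time.time())

token = vendor_issue_approval("consumer-123", "My Mortgage Bank")
print(bureau_release_report(token, "My Mortgage Bank"))    # True
print(bureau_release_report(token, "Sketchy Lender LLC"))  # False
```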

The place where the technical difficulty would ramp up is that the instant-credit-offering big-box stores would want to automate their side of the phone robot conversation. That would then require all these little startups to standardize their own APIs, which seems difficult when they’re all still busily inventing their own business models.

(Sidebar: I set up this Debix thing months ago. Then I got a phone call, out of the blue, that asked me to remember my PIN. Momentary panic: what PIN did I use? Same as the four-digit one I use for my bank ATM? Same as the six-digit one I use for my investment broker? Same as the four-digit one used by my preferred airline’s frequent-flyer web site, which I can’t seem to change? Anyway, I guessed right. I’d love to know how many people forget.)


Radiohead's Low Price Might Mean Higher Profit

Radiohead’s name-your-own-price sale of its new In Rainbows album has generated lots of commentary, especially since comScore released data claiming that 62% of customers set their price at zero, while the remaining 38% paid an average of $6, which works out to an average of $2.28 per customer overall. (There are reasons to question these numbers, but let’s take them as roughly accurate for the sake of argument.)

Bill Rosenblatt bemoaned the low price, calling it a race to the bottom. Tim Lee responded by pointing out that Rosenblatt’s “race to the bottom” is just another name for price competition, which is hardly a sign of an unhealthy market. The music market is more competitive than before, and production costs are lower, so naturally prices will go down.

But there’s another basic economic point missing in this debate: Lower average price does not imply lower profit. Radiohead may well be making more money because the price is lower.

To see why this might be true, imagine that there are 10 customers willing to pay $10 for your album, 100 customers willing to pay only $2, and 1000 customers who will only listen if the price is zero. (For simplicity assume the cost of producing an extra copy is zero.) If you price the album at $10, you get ten buyers and make $100. If you price it at $2, you get 110 buyers and make $220. Lowering the price makes you more money.

Or you can ask each customer to name their own price, with a minimum of $2. If all customers pay their own valuation, then you get $10 from 10 customers and $2 from 100 customers, for a total of $300. You get perfect price discrimination – each customer pays his own valuation – which extracts the maximum possible revenue from these 110 customers.
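
Here is the arithmetic spelled out, using the hypothetical valuations from the text (a quick sanity check, nothing more):

```python
# Valuations from the hypothetical above: 10 customers at $10,
# 100 customers at $2, and 1000 customers at $0.
valuations = [10] * 10 + [2] * 100 + [0] * 1000

def revenue_fixed(price: int) -> int:
    """Everyone whose valuation meets the price buys at that price."""
    return sum(price for v in valuations if v >= price)

def revenue_name_your_own(minimum: int) -> int:
    """Honest customers pay their full valuation if it meets the minimum."""
    return sum(v for v in valuations if v >= minimum)

print(revenue_fixed(10))         # 100: ten buyers at $10
print(revenue_fixed(2))          # 220: 110 buyers at $2
print(revenue_name_your_own(2))  # 300: perfect price discrimination
print(revenue_name_your_own(0))  # 300: plus 1000 zero-paying listeners
```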

Of course, in real life some customers who value the album at $10 will name a price of $2, so your revenue won’t reach the full $300. But if even one customer pays more than $2, you’re still better off than you’d be with any fixed price. Your price discrimination is imperfect, but it’s still better than not discriminating at all.

Now imagine that you can extract some nonzero amount of revenue from the customers who aren’t willing to pay at all, perhaps because listening will make them more likely to buy your next album or recommend it to their friends. If you keep the name-your-own-price deal and remove the $2 minimum, then you’ll capture this value because customers can name a price of zero. Some of the $10-value or $2-value people might also name a price of zero, but if not too many do so, you might be better off removing the minimum and capturing some value from every customer.

If customers are honest about their valuation, this last scenario is the most profitable – you make $300 immediately plus the indirect benefit from the zero-price listeners. Some pundits will be shocked and saddened that your revenue is only 27 cents per customer, and 90% of your customers paid nothing at all. But you won’t care – you’ll be too busy counting your money.

Finally, note that none of this analysis depends on any assumptions about customers’ infringement options. Even if it were physically impossible to make infringing copies of the album, the analysis would still hold because it depends only on how badly customers want to hear your music and how likely they are to name a price close to their true valuation. Indeed, factoring in the possibility of infringement only strengthens the argument for lowering the average price.

By all accounts, Radiohead’s album is a musical and financial success. Sure, it’s a gimmick, but it could very well be a smart pricing strategy.


Verizon Violates Net Neutrality with DNS Deviations

While many of us were discussing Comcast’s partial blocking of BitTorrent traffic, and debating its implications for the net neutrality debate, a more clear-cut neutrality violation was apparently taking place on Verizon’s network – a redirection of Verizon customers’ failed DNS lookups, designed to drive traffic to Verizon’s own search engine.

Here’s the background. Suppose you’re browsing the web and you mistype an address – say you type “fredom-to-tinker”. Your browser will try to use DNS, the system that maps textual machine names to numeric IP addresses, to translate the name you typed into an address it can actually connect to across the Net. DNS will return an error, saying that the requested name doesn’t exist. Your browser (if it’s a recent version of IE or Firefox) will respond by doing a search for the text you typed, using your default search engine.

What Verizon did is to change how DNS works (for their residential subscribers) so that when a customer’s computer looks up a DNS name that doesn’t exist, rather than returning the name-doesn’t-exist error, DNS says that the (non-existent) name maps to Verizon’s search site. This causes the browser to go to the Verizon search site, which shows the user search results (and ads) related to what they typed.

(This is the same trick used by VeriSign’s ill-fated SiteFinder service a few years ago.)
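
If you want to check whether your own ISP’s resolver plays this trick, one simple test is to look up a name that almost certainly doesn’t exist (a sketch only; captive portals and local stub resolvers can muddy the result):

```python
import socket, uuid

def resolver_rewrites_nxdomain() -> bool:
    """Query a random, almost-certainly-nonexistent domain name.
    A clean resolver raises a not-found error; a rewriting resolver
    'finds' the name and returns an address."""
    bogus = f"{uuid.uuid4().hex}.nonexistent-{uuid.uuid4().hex[:8]}.com"
    try:
        addr = socket.gethostbyname(bogus)
    except socket.gaierror:
        return False   # correct behavior: a name-doesn't-exist error
    print(f"{bogus} resolved to {addr}; your resolver is rewriting NXDOMAIN")
    return True

if __name__ == "__main__":
    print(resolver_rewrites_nxdomain())
```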

This is a clear violation of net neutrality: Verizon is interfering with the behavior of the DNS protocol in order to drive traffic to its own search site. And unlike the Comcast scenario, which might conceivably have been justified as legitimate network management, in this case Verizon cannot claim to be helping its network run more smoothly.

Verizon’s actions have two effects. The obvious effect is to drive traffic from the search engines users chose to Verizon’s own search engine. That harms users (by overriding their choices) and harms browser vendors (by degrading their users’ experiences).

The less obvious effect is to break some other applications. DNS lookups that have nothing to do with browsing will still be redirected, because the DNS infrastructure has no way of knowing which requests relate to browsing and which don’t. So if some other application does a DNS lookup and the result should be a not-found error, Verizon will cause the result to point to a Verizon server instead. If a non-browser program expects to see not-found errors sometimes and has a strategy for dealing with them, it won’t be able to carry out that strategy because it won’t see the errors it should be seeing. This will even cause browsers to misbehave in some circumstances.
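
For instance, a mail server or spam filter might treat a recipient domain that fails to resolve as bogus. Behind a rewriting resolver, a check like this hypothetical one silently stops working:

```python
import socket

def domain_exists(name: str) -> bool:
    """A common strategy: a domain that doesn't resolve is treated as fake."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Behind a rewriting resolver this returns True for ANY string, so the
# check silently stops rejecting misspelled or made-up domains.
print(domain_exists("no-such-domain-xyzzy-12345.com"))
```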

The effects of Verizon’s neutrality violation can be summarized simply: they interfere with a standard technical protocol; they cause harm on the whole, in part by breaking unrelated services; and they do this in order to override consumer choice by shifting traffic from consumer-chosen services to Verizon’s own services. This is pretty much the definition of a net neutrality violation.

This example contradicts at least two of the standard arguments against net neutrality regulation. First, it shows that violations do happen, and they do cause harm. Second, it shows that at least sometimes it’s easy to tell a harmful violation apart from legitimate network management.

But it doesn’t defeat all of the arguments against net neutrality regulation. Even though violations do occur, and do cause harm, it might turn out that the regulatory cure is worse than the disease.


How Can Government Improve Cyber-Security?

Wednesday was the kickoff meeting of the Commission on Cyber Security for the 44th Presidency, of which I am a member. The commission has thirty-four members and four co-chairs: Congressmen Jim Langevin and Michael McCaul, Admiral Bobby Inman, and Scott Charney. It was organized by the Center for Strategic and International Studies, a national security think tank in Washington. Our goal is to provide advice about cyber-security policy to the next presidential administration. Eventually we’ll produce a report with our findings and recommendations.

I won’t presume to speak for my fellow members, and it’s way too early to predict the contents of our final report. But the meeting got me thinking about what government can do to improve cyber-security. I’ll offer a few thoughts here.

One of the biggest challenges comes from the broad and porous border between government systems and private systems. Not only are government computers networked pervasively to privately owned computers, but government also relies heavily on off-the-shelf technologies whose characteristics are shaped by the market choices of private parties. While it’s important to better protect the more isolated, high-security government systems, real progress elsewhere will depend on ordinary technologies getting more secure.

Ordinary technologies are designed by the market, and the market is big and very hard to budge. I’ve written before about the market failures that cause security to be under-provided. The market, subject to these failures, controls what happens in private systems, and in practice also in ordinary government systems.

To put it another way, although our national cybersecurity strategy might be announced in Washington, our national cybersecurity practice will be defined in the average Silicon Valley cubicle. It’s hard to see what government can do to affect what happens in that cubicle. Indeed, I’d judge our policy as a success if we have any positive impact, no matter how small, in the cubicle.

I see three basic strategies for doing this. First, government can be a cheerleader, exhorting people to improve security, convening meetings to discuss and publicize best practices, and so on. This is cheap and easy, won’t do any harm, and might help a bit at the margin. Second, government can use its purchasing power. In practice this means deliberately overpaying for security, to boost demand for higher-security products. This might be expensive, and its effects will be limited because the majority of buyers will still be happy to pay less for less secure systems. Third, government can invest in human capital, trying to improve education in computer technology generally and computer security specifically, and supporting programs that train researchers and practitioners. This last strategy is slow but I’m convinced it can be effective.

I’m looking forward to working through these problems with my fellow commission members. And I’m eager to hear what you all think.


Comcast Podcast

Recently I took part in a Technology Liberation Front podcast about the Comcast controversy, with Adam Thierer, Jerry Brito, Richard Bennett, and James L. Gattuso. There’s now a (slightly edited) transcript online.


Economics of Eavesdropping For Pay

Following up on Andrew’s post about eavesdropping as a profit center for telecom companies, let’s take a quick look at the economics of eavesdropping for money. We’ll assume for the sake of argument that (1) telecom (i.e., transporting bits) is a commodity, so competition forces providers to sell it essentially at cost, (2) the government wants to engage in certain eavesdropping and/or data mining that requires cooperation from telecom providers, (3) cooperation is optional for each provider, and (4) the government is willing to pay providers to cooperate.

A few caveats are in order. First, we’re not talking about situations, such as traditional law enforcement eavesdropping pursuant to a warrant, where the provider is compelled to cooperate. Providers will cooperate in those situations, as they should. We’re only talking about additional eavesdropping where the providers can choose whether to cooperate. Second, we don’t care whether the government pays for cooperation or threatens retaliation for non-cooperation – either way the provider ends up with more money if it cooperates. Finally, we’re assuming that the hypothetical surveillance or data mining program, and the providers’ participation in it, is lawful; otherwise the law will (eventually) stop it. With those caveats out of the way, let the analysis begin.

Suppose a provider charges each customer an amount P for telecom service. The provider makes minimal profit at price P, because by assumption telecom is a commodity. The government offers to pay the provider an amount E per customer if the provider allows surveillance. The provider has two choices: accept the payment and offer service with surveillance at a price of P-E, or refuse the payment and offer reduced-surveillance service at price P. A rational provider will do whatever it thinks its customers prefer: Would typical customers rather save E, or would they rather avoid surveillance?

In this scenario, surveillance isn’t actually a profit center for the provider – the payment, if accepted, gets passed on to customers as a price discount. The provider is just an intermediary; the customers are actually deciding.

But of course the government won’t allow each customer to make an individual decision whether to allow surveillance – then the bad guys could pay extra to avoid being watched. If enough customers prefer for whatever reason to avoid surveillance (at a cost of E), then some provider will emerge to serve them. So the government will have to set E large enough that the number of customers who would refuse the payment is not large enough to support even one provider. This implies a decent-sized value for E.
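
A toy simulation makes the point concrete (all of the numbers and the distribution of privacy premiums are invented for illustration): the government must raise E until the customers willing to pay the privacy premium are too few to sustain even one surveillance-free provider.

```python
import random

random.seed(1)
N = 100_000                                  # market size
MIN_VIABLE_CUSTOMERS = 5_000                 # smallest sustainable provider
# Each customer's privacy premium: the most they'd pay to avoid surveillance.
premiums = [random.expovariate(1 / 3.0) for _ in range(N)]  # mean $3/month

def holdouts(E: float) -> int:
    """Customers who forgo the discount E to buy surveillance-free service."""
    return sum(1 for v in premiums if v > E)

E = 0.0
while holdouts(E) >= MIN_VIABLE_CUSTOMERS:   # raise E until no provider
    E += 0.25                                # can survive on the holdouts
print(f"government must pay about E = ${E:.2f} per customer per month")
print(f"remaining holdouts: {holdouts(E)}")
```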

But there’s another possibility. Suppose a provider claims to be refusing the payment, but secretly accepts it and allows surveillance of its customers. If customers fall for the lie, then the provider can charge P while pocketing the government payment E. Now surveillance is a profit center for the provider, as long as customers don’t catch on.

If customers know that providers might be lying, savvy customers will discount a provider’s claim to be refusing the payments. So the premium customers are willing to pay for (claims of) avoiding surveillance will be smaller, and government can buy more surveillance more cheaply.

The incentives here get pretty interesting. Government benefits by undermining providers’ credibility, as that lowers the price government has to pay for surveillance. Providers who are cooperating with the government want to undermine their fellow providers’ credibility, thereby making customers less likely to buy from surveillance-resisting providers. And providers who claim, truthfully or not, to be refusing surveillance want to pick fights with the government, making it look less likely that they’re secretly cooperating on surveillance.

If government wants to use surveillance, why doesn’t it require providers to cooperate? That’s a political question that deserves a post of its own.