November 21, 2024

Privacy and the Commitment Problem

One of the challenges in understanding privacy is how to square what people say about privacy with what they actually do. People say they care deeply about privacy and resent unexpected commercial use of information about them; but they happily give that same information to companies likely to use and sell it. If people value their privacy so highly, why do they sell it for next to nothing?

To put it another way, people say they want more privacy than the market is producing. Why is this? One explanation is that actions speak louder than words: people don’t really want privacy very much (despite what they say), and the market is producing an efficient level of privacy. But there’s another possibility: perhaps a market failure is causing privacy to be underproduced.

Why might this be? A recent Slate essay by Reihan Salam gives a clue. Salam talks about the quandary faced by companies like the financial-management site Wesabe. A new company building up its business wants to reassure customers that their information will be treated with the utmost care. But later, when the company is big, it will want to monetize that same customer information. Salam argues that these forces are in tension and few if any companies will be able to stick with their early promises not to be evil.

What customers want, of course, is not good intentions but a solid commitment from a company that it will stay privacy-friendly as it grows. The problem is that there’s no good way for a company to make such a commitment. In principle, a company could make an ironclad legal commitment, written into a contract with customers. But in practice customers will have a hard time deciphering such a contract and figuring out how much it actually protects them. Is the contract enforceable? Are there loopholes? The average customer won’t have a clue. He’ll do what he usually does with a long website contract: glance briefly at it, then shrug and click “Accept”.

An alternative to contracts is signaling. A company will say, repeatedly, that its intentions are pure. It will appoint the right people to its advisory board and send its executives to say the right things at the right conferences. It will take conspicuous, almost extravagant steps to be privacy-friendly. This is all fine as far as it goes, but these signals are a poor substitute for a real commitment. They aren’t too difficult to fake. And even if the signals are backed by the best of intentions, everything could change in an instant if the company is acquired – a new management team might not share the original team’s commitment to privacy. Indeed, if management’s passion for privacy is holding down revenue, such an acquisition will be especially likely.

There’s an obvious market failure here. If we postulate that at least some customers want to use web services that come with strong privacy commitments (and are willing to pay the appropriate premium for them), it’s hard to see how the market can provide what they want. Companies can signal a commitment to privacy, but those signals will be unreliable so customers won’t be willing to pay much for them – which will leave the companies with little incentive to actually protect privacy. The market will underproduce privacy.

How big a problem is this? It depends on how many customers would be willing to pay a premium for privacy – a premium big enough to replace the revenue from monetizing customer information. How many customers would be willing to pay this much? I don’t know. But I do know that people might care a lot about privacy, even if they’re not paying for it today.

Scoble/Facebook Incident: It's Not About Data Ownership

Last week Facebook canceled, and then reinstated, Robert Scoble’s account because he was using an automated script to export information about his Facebook friends to another service. The incident triggered a vigorous debate about who was in the right. Should Scoble be allowed to export this data from Facebook in the way he did? Should Facebook be allowed to control how the data is presented and used? What about the interests of Scoble’s friends?

An interesting meme kept popping up in this debate: the idea that somebody owns the data. Kara Swisher says the data belong to Scoble:

Thus, [Facebook] has zero interest in allowing people to escape easily if they want to, even though THE INFORMATION ON FACEBOOK IS THEIRS AND NOT FACEBOOK’S.

Sorry for the caps, but I wanted to be as clear as I could: All that information on Facebook is Robert Scoble’s. So, he should–even if he agreed to give away his rights to move it to use the service in the first place (he had no other choice if he wanted to join)–be allowed to move it wherever he wants.

Nick Carr disagrees, saying the data belong to Scoble’s friends:

Now, if you happen to be one of those “friends,” would you think of your name, email address, and birthday as being “Scoble’s data” or as being “my data.” If you’re smart, you’ll think of it as being “my data,” and you’ll be very nervous about the ability of someone to easily suck it out of Facebook’s database and move it into another database without your knowledge or permission. After all, if someone has your name, email address, and birthday, they pretty much have your identity – not just your online identity, but your real-world identity.

Scott Karp asks whether “Facebook actually own your data because you agreed to that ownership in the Terms of Service.” And Louis Gray titles his post “The Data Ownership Wars Are Heating Up”.

Where did we get this idea that facts about the world must be owned by somebody? Stop and consider that question for a minute, and you’ll see that ownership is a lousy way to think about this issue. In fact, much of the confusion we see stems from the unexamined assumption that the facts in question are owned.

It’s worth noting, too, that even today’s expansive intellectual property regimes don’t apply to the data at issue here. Facts aren’t copyrightable; there’s no trade secret here; and this information is outside the subject matter of patents and trademarks.

Once we give up the idea that the fact of Robert Scoble’s friendship with (say) Lee Aase, or the fact that that friendship has been memorialized on Facebook, has to be somebody’s exclusive property, we can see things more clearly. Scoble and Aase both have an interest in the facts of their Facebook-friendship and their real friendship (if any). Facebook has an interest in how its computer systems are used, but Scoble and Aase also have an interest in being able to access Facebook’s systems. Even you and I have an interest here, though probably not so strong as the others, in knowing whether Scoble and Aase are Facebook-friends.

How can all of these interests best be balanced in principle? What rights do Scoble, Aase, and Facebook have under existing law? What should public policy say about data access? All of these are difficult questions whose answers we should debate. Declaring these facts to be property doesn’t resolve the debate – all it does is rule out solutions that might turn out to be the best.