One of the challenges in understanding privacy is how to square what people say about privacy with what they actually do. People say they care deeply about privacy and resent unexpected commercial use of information about them; but they happily give that same information to companies likely to use and sell it. If people value their privacy so highly, why do they sell it for next to nothing?
To put it another way, people say they want more privacy than the market is producing. Why is this? One explanation is that actions speak louder than words, people don’t really want privacy very much (despite what they say), and the market is producing an efficient level of privacy. But there’s another possibility: perhaps a market failure is causing underproduction of privacy.
Why might this be? A recent Slate essay by Reihan Salam gives a clue. Salam talks about the quandary faced by companies like the financial-management site Wesabe. A new company building up its business wants to reassure customers that their information will be treated with the utmost care. But later, when the company is big, it will want to monetize the same customer information. Salam argues that these forces are in tension and few if any companies will be able to stick with their early promises to not be evil.
What customers want, of course, is not good intentions but a solid commitment from a company that it will stay privacy-friendly as it grows. The problem is that there’s no good way for a company to make such a commitment. In principle, a company could make an ironclad legal commitment, written into a contract with customers. But in practice customers will have a hard time deciphering such a contract and figuring out how much it actually protects them. Is the contract enforceable? Are there loopholes? The average customer won’t have a clue. He’ll do what he usually does with a long website contract: glance briefly at it, then shrug and click “Accept”.
An alternative to contracts is signaling. A company will say, repeatedly, that its intentions are pure. It will appoint the right people to its advisory board and send its executives to say the right things at the right conferences. It will take conspicuous, almost extravagant steps to be privacy-friendly. This is all fine as far as it goes, but these signals are a poor substitute for a real commitment. They aren’t too difficult to fake. And even if the signals are backed by the best of intentions, everything could change in an instant if the company is acquired – a new management team might not share the original team’s commitment to privacy. Indeed, if management’s passion for privacy is holding down revenue, such an acquisition will be especially likely.
There’s an obvious market failure here. If we postulate that at least some customers want to use web services that come with strong privacy commitments (and are willing to pay the appropriate premium for them), it’s hard to see how the market can provide what they want. Companies can signal a commitment to privacy, but those signals will be unreliable so customers won’t be willing to pay much for them – which will leave the companies with little incentive to actually protect privacy. The market will underproduce privacy.
How big a problem is this? It depends on how many customers would be willing to pay a premium for privacy – a premium big enough to replace the revenue from monetizing customer information. How many customers would be willing to pay this much? I don’t know. But I do know that people might care a lot about privacy, even if they’re not paying for privacy today.
Rachel Greenstadt had a really interesting paper developing this idea a few years back, and introduced attention economics as well:
Why We Can’t Be Bothered To Read Privacy Policies: Privacy as a Lemons Market,
Tony Vila, Rachel Greenstadt, David Molnar
Fifth International Conference on Electronic Commerce (ICEC 2003), Pittsburgh, PA, October 2003.
http://www.eecs.harvard.edu/~greenie/econprivacy.pdf
Dan,
Your sarcasm aside, you are missing the point. I remember that Radio Shack, back in the ’90s, required your name and phone number even to make a cash purchase. So cash does not necessarily provide the total anonymity that you suggest. There are also many things that you cannot get with cash, such as a rental car. Buying an airplane ticket with cash raises red flags, although it is still legal and possible. Cash is not a panacea.
You are also making a false dichotomy: either I value my privacy and use cash, or I don’t value my privacy and I use credit cards, checks, whatever. The real world is not binary. Also, many goods can be found online at substantial bargains: sometimes used (such as on eBay) when I don’t need something new, sometimes from a seller who specializes in a niche market, buys in bulk, and passes the savings along, sometimes simply because the online merchant does not have to pay for a storefront and the personnel to staff it.
You are trying to take a complex issue and make it binary, and you can only do that by throwing out many essential details.
Ed, you haven’t discovered a new market failure, merely another instance of a fairly well understood one, generally described by the example of “peaches” and “lemons” in the used-car market (Akerlof’s lemons model).
http://www.bized.co.uk/educators/16-19/economics/marketfail/activity/principal11.htm
In essence: you can’t outsource trust.
Perhaps I can help you…there’s a little-known but remarkably resilient anonymity technology available that you might find useful. It’s known as “cash”. You take it to a place called a “store”, and can almost always obtain the exact same goods that are available online, at roughly the same price, but completely anonymously!
Economists often refer to something known as “revealed preferences”: what you actually prefer is much more reliably indicated by your actions than by your words. You claim to value the privacy of your personal data at a substantial fraction of the price of the goods you purchase – yet anonymity is apparently not even worth the price of a trip to the store to you. Forgive my skepticism…
Hal,
It’s worse than you say. Honest consumers who for the moment have no needs beyond the restrictions of DRM have no way of knowing whether or not they will have such needs in the future. This is especially true when some DRM schemes contain features that are not currently being used, but may be used in the future. Thus, consumers may have no complaint with DRM schemes as implemented today, but may have complaints with the same DRM scheme as implemented in the future. This is in large part true because those making use of DRM schemes architect them to allow denial of activities that most consumers would consider “fair use.”
Dan,
Your argument is specious. You are comparing apples and oranges.
When I purchase something from a company, I don’t consider my personal information part of my payment. It’s a piece of information they sometimes need to do business, but it is not part of the payment. However, when I buy something, I own that item that I purchased, and the right of first sale says that I can do just about anything with that item that I now own. When I purchase something, it is clearly an asset. When I provide personal information as required to make a purchase, however, I am not intending this information to become an asset to be traded. If I was doing that, I would want a substantially lower price for the object I am buying. But I do not have that option.
I don’t know if it’s realistic or not, but I prefer to think of myself as the data owner and the companies I deal with as stewards.
I wish it were possible to address this by providing customers with a legal and/or technological mechanism for cancelling the relationship and making the data stewarded by the company inaccessible from then on – especially when I put in a credit card number to buy something, go back three years later, and find they still have the number on file. (I’m looking at *you*, Amazon!)
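A minimal sketch of how the technological half of that could work is “crypto-shredding”: keep each customer’s record encrypted under its own key, and make “cancel the relationship” mean destroying the key. This is my own illustration, not anything Amazon or any other company actually does; the class and method names are hypothetical, and it assumes the third-party Python `cryptography` package.

```python
# Hypothetical sketch of crypto-shredding data stewardship.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

class DataSteward:
    """Holds each customer's record encrypted under a per-customer key."""

    def __init__(self):
        self._keys = {}     # customer_id -> encryption key
        self._records = {}  # customer_id -> encrypted blob

    def store(self, customer_id, data: bytes):
        key = Fernet.generate_key()
        self._keys[customer_id] = key
        self._records[customer_id] = Fernet(key).encrypt(data)

    def read(self, customer_id) -> bytes:
        key = self._keys[customer_id]   # raises KeyError once revoked
        return Fernet(key).decrypt(self._records[customer_id])

    def revoke(self, customer_id):
        # Destroying the key makes the stored blob permanently unreadable,
        # even in old backups of the encrypted records.
        del self._keys[customer_id]

steward = DataSteward()
steward.store("alice", b"4111-1111-1111-1111")
steward.revoke("alice")
# steward.read("alice") now raises KeyError: the card number is gone for good.
```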
DRM anyone? MS Passport? OpenID?
Does anyone else see the issue this way or am I batty?
Dan Simon makes an interesting point. The analogy goes further, too. Consumers who are perfectly willing to follow DRM policies and restrictions, either because they are (gasp) honest or because they don’t happen to have uses that go beyond the restrictions, cannot credibly signal that they will follow the rules. Hence we have a market failure which may arguably become far more significant in the future, a failure to adequately produce intellectual property even if many people would be willing to stop pirating it.
I want to draw some parallels to openID. Currently we have hundreds of websites each asking you for your email, password, home address, phone number, town, zip, etc., and if you change your email or home address you may have to update hundreds of sites just to keep them right.
In openID we have one central place where we log in and every website out there subscribes to this one login place. Now they do not get to see our login or password, but they do get to see if we present the right credentials.
How about doing the same for home addresses, phone numbers, etc.? We would have it all stored in one central place along with our openID, and companies would have to request it from that central store. This will work fine in theory, but in practice I think the companies will rebel and ask for too much every time.
The ones who only need your email will ask for your address as well, and your phone number will also be “needed” because there is a 0.001% chance that they might have to contact you. And if we reject them asking for it, they will just block our ability to do business with them. This is always a bigger hassle for the consumer than for the company. It hurts them in the long run, but in the short run it is beneficial, and currently all companies think only short term. They do not care if they screw their customers over in the long term. (Think outsourcing and call centres in India, which is a short-term boost but hurts customer loyalty long term.)
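To make the attribute-release idea concrete, here is a toy sketch of a central profile store that hands out only the attributes the user has approved for a given site. This is my own illustration of the idea, not the actual OpenID protocol; all the names (ProfileStore, request_attributes, shop.example) are hypothetical.

```python
# Toy sketch of a central attribute store with per-site release policies.
# An illustration of the idea only, not the real OpenID protocol.

class ProfileStore:
    def __init__(self, attributes):
        self._attributes = attributes   # e.g. {"email": ..., "phone": ...}
        self._policies = {}             # site -> set of approved attribute names

    def approve(self, site, attribute_names):
        self._policies.setdefault(site, set()).update(attribute_names)

    def request_attributes(self, site, wanted):
        approved = self._policies.get(site, set())
        granted = {k: self._attributes[k] for k in wanted if k in approved}
        denied = [k for k in wanted if k not in approved]
        return granted, denied

store = ProfileStore({"email": "me@example.com", "phone": "555-0100",
                      "address": "123 Main St"})
store.approve("shop.example", {"email"})

# The shop over-asks, but only gets what the user approved:
granted, denied = store.request_attributes("shop.example",
                                           ["email", "phone", "address"])
print(granted)  # {'email': 'me@example.com'}
print(denied)   # ['phone', 'address'] -- the site may refuse service here
```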
Joe has a good point — how do you put a value on privacy to the consumer? Most cases in which the consumer gives up privacy have little short-term cost, and an uncertain long-term cost which is probably just more junk mail.
The usual way to protect against uncertain future events of uncertain severity is to buy insurance. Many financial services companies are now making products that look rather like insurance against serious loss from privacy breaches (e.g. identity theft).
In the presence of such insurance, doesn’t it make sense for the consumer to trade privacy for lower cost, when money saved > insurance cost?
On the other hand, a savvy early-stage company might decide to self-insure against privacy breaches, knowing that during its early “good” behavior it would have to put very little money aside, because the risk would be very low. If later managers want to engage in more “evil” practices, they need to set ample reserves aside, and if profit from “evil” < cost of reserves, then the company will stay “good.” “We offer insurance against privacy risks” would be a powerful signal; “we no longer offer insurance against privacy risks” would be just as powerful, and might well drive away enough business to make only “good” behavior cost-effective.
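A back-of-the-envelope version of that incentive calculation, as a sketch with invented numbers (nothing here comes from a real insurance product):

```python
# Back-of-the-envelope sketch of the self-insurance incentive above.
# All figures are invented for illustration.

def stays_good(profit_from_evil, breach_probability, payout_per_breach,
               customers):
    """The company turns 'evil' only if the extra profit exceeds the
    reserves it must set aside to keep its insurance promise."""
    required_reserves = breach_probability * payout_per_breach * customers
    return profit_from_evil < required_reserves

# Early "good" behavior: breach risk is tiny, so reserves cost almost nothing.
print(stays_good(profit_from_evil=0, breach_probability=0.0001,
                 payout_per_breach=5_000, customers=100_000))    # True

# Later management weighs selling the data against the reserve cost:
print(stays_good(profit_from_evil=2_000_000, breach_probability=0.05,
                 payout_per_breach=5_000, customers=100_000))    # True: 25M reserves dwarf 2M gain
print(stays_good(profit_from_evil=50_000_000, breach_probability=0.05,
                 payout_per_breach=5_000, customers=100_000))    # False: now "evil" pays
```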
Isn’t there another side to the story? That is, might the market failure you describe also exist because people can’t properly comprehend what “privacy” is to them? In the Nissenbaum sense, we’d want to contextualize the integrity of their privacy and then make people fully aware of how that integrity might break down in the future (sale of the company’s assets, business decision to sell assets, data breaches, insider mischief, etc.).
Of course, because I can’t think of a way to do this, the lawyer in me says to make these commitments ironclad contracts that are somewhat general (like open source licenses) whereby the parent company agrees to certain conditions when it receives the customer’s information and has to renegotiate for different conditions. I mentioned something like this at the Cloud symposium and I think it makes a lot of sense until we can come up with something more flexible.
Correction – I think the problem isn’t as bad as I said, because the patent database is open, so in principle someone could do a search and find out whether there were in fact any patents left that affected that protocol. But in practice this is difficult, expensive, and/or time-consuming, so it’s still a problem, especially for small companies or free-software projects.
What’s really needed is either:
a) get rid of software patents (preferable)
or
b) have a system where a company could make an announcement that a given technology they had developed was intended to be patent free, and then if after a set time period nobody had stated that they hold a patent on it, the law would make any later patent claims on it invalid.
What happened to your principles, Ed? Sure, online users sometimes complain that they want to retain control over their private data even after they’ve handed it over to some company in return for some product or service. But as righteous defenders of the “Freedom to Tinker”, they know better than to try to act on those complaints.
For example, they could try to impose some kind of onerous DRM scheme on their data, to prevent their “customers”, the companies they’re doing business with, from using it the way they want. But we all know that that approach would fail miserably, and annoy the customers as well. They could also try a massive legal campaign to try to coerce their customers into limiting the way they use the data they bought with their products and services. Again, though, that would probably just push the companies to use more devious means, and/or legislative reforms, to free them to do what they want with the data, such as sharing it with their partners.
Instead, the users do exactly what you’ve been advocating for years: they sell their personal data with few strings attached, and thus get happy customers willing to give maximum value in return. Why on earth are you complaining? Is the shoe really that uncomfortable when it’s on the other foot?
It occurred to me recently that there is a similar problem with statements by companies (such as Microsoft) about interoperability. As long as the law still allows patents on software, there is no obvious way that a company can make a binding declaration that a particular format or protocol they have developed will remain free/open in the future. They might say ‘here are the patents we hold on this protocol, and to show our good faith we will let them lapse in the coming year’. But even if they do this, nobody can be sure they haven’t kept some up their sleeve, which they can bring out later when everyone is using that protocol.
Ed,
One interesting thing about your hypothetical counterexample is that it provides the consumer with a way to assign a value to their privacy (or at least a lower bound to its value). However, I don’t know of any companies currently implementing such a scheme, and in the absence of guidance of that sort, most people are completely clueless as to how they should assign a value to their privacy.
But Constance is right in noting that the problem is compounded, because not only do companies not provide their assessment of the value of your privacy, but they don’t even provide the option to undertake a transaction without providing that information (although in some cases, you might be able to lie in order to protect your information, such as with free website registrations and the like).
Even people who read a website’s privacy policy (or that of their credit card company, cell phone company, etc.) tend to expect the worst but hope for the best, trusting that a company will abide by their self-written policy – not so much because they really trust the company, but because they can’t obtain the service without acceding to the demands of the company.
In some ways it’s worse than Michael Donnelly says. In theory, a shareholder could bring suit against a company for not selling customers’ personal information (despite the existence of a privacy policy) if they could make a decent case that there was more money to be made for shareholders by doing so. And in practice, companies that go into bankruptcy can be ordered to sell customer information because it’s a marketable asset. The privacy policy, like most other contracts involving a bankrupt company, can be modified/abrogated by judicial order.
If you’re going to have effective signalling, you need effective sanctions for companies that modify or breach their original privacy policies in serious ways. Outside the US, this kind of behavior can lead to serious fines, jail time, even revocation of the privilege to do certain kinds of business. Inside the US, bupkis.
Logical, call me old-fashioned, but I do not believe that consumers should have to stoop to deception in order to protect their personal information. They should simply have the right to say “no” without suffering financial penalty or inconvenience. And as I pointed out, the same information is requested by some retailers even when I am making a cash purchase.
That’s a good method to show the variables, but don’t forget that they are, in fact, variable. As you mention in your original article, things change as companies grow. So a company may have even sold a bunch of product at P+R, but if the value of R changes, those old sales have unrealized value in their private data, just like stock waiting to be sold. It’ll be an interesting company, indeed, that can resist the call to cash in that stock.
I still think the consumer forces are too weak, even in that model. The average consumer is going to take the cheaper price P every time when presented with the same product for less money. The value of R is going to have to be so small to get any volume that it may never be worth the extra development effort to partition those orders and their data. And, as you say, the company also now bears the unwieldy burden of trying to convince a concerned consumer (anyone who buys at P+R) that their data will indeed be handled gently.
The reason I bring up government regulation is because that is typically the next step when a market cannot self-regulate and begins to hurt consumers. We’re not at that pain point and, unfortunately, we may never be at that pain point because of general indifference and a lack of understanding from the consumer’s point of view.
@Constance… you may not be trying hard enough. Certainly there are practical problems in the US with being required to divulge more than necessary to utilities, grantors of credit, and medical/insurance providers. But most stores are not so tough. I usually don’t mind giving out my zip code (or simply providing a random nearby one). When asked for a phone number, I simply say it is unlisted and the clerk moves on. Cash is still king in most stores, and if I really feel I must go to a store that has intrusive data collection practices (that are usually tied to using a credit card or check), I simply take enough cash for the transaction and no questions are usually asked. And if they are, you are under no obligation to provide any (correct) information for a simple cash transaction.
Michael,
Here’s a hypothetical counterexample to your argument.
Suppose a user is deciding whether to do business with a company. The company charges customers a price P, and it monetizes each customer’s information to get revenue R.
Now the company can offer customers an alternative “private” version of its service, which charges a price of P+R, but promises to never reveal the customer’s information to anyone.
The company will be indifferent between the two versions, since each yields revenue P+R. Some customers will prefer the private version, so the company will improve customer satisfaction (and presumably profit) by offering both.
But the problem is that the company has no effective way to convince customers that the private version really will be private. And if the company can’t convince customers that their extra payment of R is really buying privacy, then customers won’t buy the private version.
There’s your market failure: a product that the company wants to sell, and the customer wants to buy, but which can’t actually be provided.
Note that government regulation plays no role at all in this story.
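The arithmetic here is simple enough to spell out. A minimal sketch, with invented figures for P and R, of why the company is indifferent between the two versions:

```python
# Minimal sketch of the two-version pricing model above.
# P = price charged, R = revenue from monetizing a customer's data.
# The figures are invented for illustration.

P = 50.0   # base price of the service
R = 8.0    # revenue from monetizing one customer's information

def revenue(version):
    if version == "standard":
        return P + R   # customer pays P, and the data is monetized for R
    if version == "private":
        return P + R   # customer pays P + R, and the data is never sold
    raise ValueError(version)

# The company earns the same either way, so offering both can only improve
# customer satisfaction -- *if* the privacy promise is credible.
assert revenue("standard") == revenue("private") == 58.0
```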
I think you’re overlooking a key part of the equation that is not in the immediate picture when you talk about privacy: what is the company’s purpose?
Every company exists primarily to make money. The company must also balance this goal with many other guidelines according to the ownership, such as public policy, legal risk, and what-have-you. But the bottom line is the goal of the company is to make money. Period. This is particularly true of companies that are publicly held, as their executives will find themselves facing some angry shareholders if they act against the interests of the company.
So if you pick that up and hold it against a privacy policy, why would a company NOT monetize anything it can? There are some real reasons, such as the law, negative consumer backlash causing a net loss from selling names, and probably a dozen more. What does not appear in the list is anything along the lines of “don’t be evil” or “let’s be fair”. A company trying to be good or fair is doing so solely because it believes such an image will create more money down the road. To do otherwise is counter to the reason the company exists.
This driving force is what causes so much self-regulation to break down when consumer forces aren’t enough to differentiate the good guys from the bad guys. There are very few business models today that will make more money by being “good”. Most companies can make more money by selling or using private data (legally) than they lose to consumer anger.
So until the government comes in with some guidelines to move “the line” back a bit, we’re left with the open market. And I don’t think consumers are going to penalize bad companies any time soon. Market forces are just not strong enough there.
I think you make a misstatement by saying “but they happily give that same information to companies likely to use and sell it.” I do not believe that is true, I for one certainly do not “happily give” that information away. I simply have no choice but to do so if I want to complete a transaction.
All that information is required to make a purchase online. Indeed, even many websites that offer their services/apps at no charge still require you to provide address, telephone number, etc. even though all that is truly required to access the service is an email address.
And many retail stores are moving to that model as well. I have lost count of the number of times I’ve gone to a retail store in a mall, an auto-maintenance shop, or a big-box store to make a purchase – even with cash – and been told that a zip code and telephone number were required to complete the transaction.
The most egregious demand for personal info comes from every non-financial service or company that demands your social security number for “identification” purposes. Try declining to provide your social, as is your right. You are either denied service entirely or, in the case of utility companies, are levied a financial penalty in the form of a deposit for not providing the number.
If consumers were given the option not to provide this information for every little transaction, they would probably opt out. And by “given the option” I do not mean having to speak up and refuse; I mean goods and services providers should not even be asking for it, or should at least ask first with an explicit statement that it is not required, they would just like it for marketing purposes.