I wrote yesterday about a market failure relating to privacy, in which a startup company can’t convincingly commit to honoring its customers’ privacy later, after the company is successful. If companies can’t commit to honoring privacy, then customers won’t be willing to pay for privacy promises – and the market will undersupply privacy.
Today I want to consider how to attack this problem. What can be done to enable stronger privacy commitments?
I was skeptical of legal commitments because, even though a company might make a contractual promise to honor some privacy rules, customers won’t have the time or training to verify that the promise is enforceable and free of loopholes.
One way to attack this problem is to use standardized contracts. A trusted public organization might design a privacy contract that companies could sign. Then if a customer knew that a company had signed the standard contract, and if the customer trusted the organization that wrote the contract, the customer could be confident that the contract was strong.
But even if the contract is legally bulletproof, the company might still violate it. This risk is especially acute with a cash-strapped startup, and even more so if the startup is located offshore. Many startups will have shallow pockets and little presence in the user’s locality, so they won’t be deterred much by potential breach-of-contract lawsuits. If the startup succeeds, it will eventually have enough at stake that it will have to keep the promises that its early self made. But if it fails or is on the ropes, it will be strongly tempted to try cheating.
How can we keep a startup from cheating? One approach is to raise the stakes by asking the startup to escrow money against the possibility of a violation – this requirement could be built into the contract.
Another approach is to have the actual data held by a third party with deeper pockets – the startup would provide the code that implements its service, but the code would run on equipment managed by the third party. Outsourcing of technical infrastructure is increasingly common already, so the only difference from existing practice would be to build a stronger wall between the data stored on the server and the company providing the code that implements the service.
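To make the proposed wall concrete, here is a minimal sketch (hypothetical names throughout, not any real product): the third party runs the data store and exposes only a narrow per-user interface, so the startup’s service code never touches the raw database.

```python
class CustodianAPI:
    """Runs on the third party's equipment; the startup never sees the raw store."""

    def __init__(self):
        self._store = {}  # user_id -> record, held only by the custodian

    def put_record(self, user_id, record):
        self._store[user_id] = record

    def get_record(self, user_id, requesting_user):
        # The startup's code can only fetch a record on behalf of the user
        # it is currently serving -- no bulk-export path exists at all.
        if user_id != requesting_user:
            raise PermissionError("can only access the active user's own data")
        return self._store.get(user_id)


def handle_profile_page(api, session_user):
    """Startup-supplied service code: it sees one record per request."""
    return api.get_record(session_user, requesting_user=session_user)
```

Whether so narrow an interface is rich enough depends entirely on what the service is supposed to do, which is exactly where the difficulty lies.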
From a technical standpoint, this wall might be very difficult to build, depending on what exactly the service is supposed to do. For some services the wall might turn out to be impossible to build – there are some gnarly technical issues here.
There’s no easy way out of the privacy commitment problem. But we can probably do more to attack it than we do today. Many people seem to have given up on privacy online, which is a real shame.
I don’t like the idea of adding more requirements to new businesses. Start-ups already face a significant barrier to entry from established companies, and that lack of new competition can make established companies complacent, egotistical, and more likely to take advantage of their consumers.
Honestly, a healthy amount of competition would help, as long as there were oversight, whistle-blowers, and disclosure. That way, if consumers are being abused, they know it, they have somewhere else to take their business, and dishonest companies can go under.
If the companies are too big to go under, we should be fighting them with antitrust laws instead. Competition and visibility keep companies honest.
I think this is a useful insight. For me, it helps illuminate why I trust the government less than I trust (say) Google, when it comes to something like keeping my health data secure. Governments have a particularly hard case of the commitment problem. In the UK, where we don’t have much in the way of a written constitution, it’s frequently said that ‘no government can bind its successor’. Although that’s not completely true (a UK government would have a hard time withdrawing from the European Convention on Human Rights) it reflects how officials think and act. Your medical data given today for research can be used tomorrow for law enforcement, and there’s nothing you can do to stop it. So a prudent citizen won’t give them anything in the first place.
Hal,
If I buy a CD and share it with friends or “associates,” I am breaking the law. But if a company acquires my personal information as part of a transaction and shares that information with other companies, there is not (in the US) any law that prohibits that. Your analogy does not hold water because no one here is suggesting the end of copyright or that *all* data should be free.
The “freedom to tinker” means preservation of the rights of first sale, not the right to share at will.
However, you have a point. If companies are willing to totally give up copyright and patent protection (as you suggest), then I am at that point equally willing to stop worrying about privacy of my personal information. However, I would not advocate for either one, because I think both are equally destructive to the economy.
Seems to me that it would be fraud if a company didn’t honor its privacy contract. That would suggest (and if it doesn’t, the law should make it so) that the corporate veil does not shield the officers, directors, and executives from personal lawsuits.
Frankly, from my point of view, violating a privacy contract, regardless of the harm done, or not done, should be a “special circumstances” capital offence.
Hal, meet Bruce Schneier’s essay on information exchange in the face of power disparities. When my tinkering can cause the CEO of the company whose stuff I’m tinkering with to be arrested on a regular basis, to lose house and health insurance, or to receive harassing phone calls to their personal number several times a day for the next several years, then maybe, just maybe that symmetry might be apt.
As I commented in the other thread, I think Dan Simon’s analogy is spot on. Shouldn’t businesses have the “Freedom to Tinker” with their customers’ data? Shouldn’t they be free to shift it to other institutions just like people arguably should be free to shift around the information products they have downloaded? Or, from the opposite direction, shouldn’t both businesses and consumers have the “Freedom to Promise Not to Tinker” and be bound by such promises?
Trusted Computing has been offered as a possible technical solution to both problems. While it has its limitations, it could allow businesses to voluntarily and verifiably adopt certain limitations and restrictions on what they do with customer data. For example, it could block wholesale copying of the entire database and allow only individual records to be viewed and processed manually. This might not fully restrict copying but it could make it much more difficult and expensive, similar to how TC could implement DRM restrictions.
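As a rough illustration (ordinary software standing in for what Trusted Computing would enforce in hardware, with made-up names), the restriction might have this shape: individual lookups succeed, there is no bulk-export call, and a rate limit makes wholesale scraping slow and conspicuous.

```python
import time

class RateLimitedRecords:
    def __init__(self, records, max_reads_per_minute=10):
        self._records = records          # record_id -> customer data
        self._max = max_reads_per_minute
        self._reads = []                 # timestamps of recent reads

    def get_one(self, record_id):
        now = time.time()
        self._reads = [t for t in self._reads if now - t < 60]
        if len(self._reads) >= self._max:
            raise RuntimeError("per-minute record limit exceeded")
        self._reads.append(now)
        return self._records[record_id]

    # Deliberately no get_all(): wholesale copying isn't in the interface.
```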
The purpose of the third party is to put the burden of compliance on a large, deep-pocketed company that makes an easily-located and fat target for lawsuits.
Technological measures seem difficult. DRM doesn’t work, so protection would instead have to be based on aliases: people would need to be able to get alias mailing addresses and other such contact info, as well as bank accounts or credit cards, that forwarded in some way to their real ones but could be divested and new ones made quickly and easily.
As it stands, it’s easy to get e-mail addresses that are temporary or forward to your main one, and whose plug can be pulled if it starts receiving spam or simply once you’re done with it. The ability to get snail-mail addresses cheaply or for free that behave the same way would be useful — a kind of auto-forwarding PO box whose mail gets sent to your actual home address until you tell the post office web site to cancel it. Revealing the box’s address wouldn’t reveal your street address to potential stalkers or advertisers, and if it began to get junk mail you could pull the plug, or just pull it as soon as you had received what you wanted shipped.
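A toy model of the disposable-alias idea might look like the sketch below – all names are hypothetical, and a real version would live at the mail provider or post office rather than in a Python class.

```python
import secrets

class AliasDirectory:
    def __init__(self):
        self._aliases = {}  # alias -> [real_address, active?]

    def create_alias(self, real_address):
        alias = "box-" + secrets.token_hex(4)   # safe to hand out to merchants
        self._aliases[alias] = [real_address, True]
        return alias

    def deliver(self, alias, message):
        real_address, active = self._aliases.get(alias, (None, False))
        if not active:
            return False                         # plug pulled: mail is dropped
        print(f"forwarding to {real_address}: {message}")
        return True

    def pull_the_plug(self, alias):
        if alias in self._aliases:
            self._aliases[alias][1] = False      # alias dies; real address stays private
```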
Temporary phone numbers and temporary bank/credit card accounts would also be useful. There have been some moves in the direction of supplying the latter, aimed at internet use, with a limited amount of money transferred to the “card” so that misuse of the number can cost you only that much; but they are still not ubiquitously available without passing a credit check and without jumping through hoops such as physically presenting yourself at a bank branch to sign documents for each new account. Ideally, you could just click the mouse a few times at your bank’s website after logging in and presto: the “credit card number” xyz395r8 has fifty bucks apparently on it, which if used will actually come out of your checking account; and since it’s not actually a loan, you don’t need to be a good credit risk to do it. (If the transaction bounces, the bank may charge you a fee, much as for a bounced check, and the ecommerce vendor presumably won’t ship your order.)
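Here is a sketch of how that might work, assuming a purely hypothetical bank API: the bank mints a token backed by the checking account with a hard spending cap, and the real account number is never exposed to the merchant.

```python
import secrets

class Bank:
    def __init__(self, checking_balance):
        self.checking_balance = checking_balance
        self._tokens = {}  # token -> remaining spending limit

    def mint_card_number(self, limit):
        token = secrets.token_hex(8)      # the disposable "credit card number"
        self._tokens[token] = limit
        return token

    def authorize(self, token, amount):
        remaining = self._tokens.get(token, 0)
        if amount > remaining or amount > self.checking_balance:
            return False                  # declined; the vendor won't ship
        self._tokens[token] = remaining - amount
        self.checking_balance -= amount   # comes out of checking, not credit
        return True

# A few clicks at the bank's website...
bank = Bank(checking_balance=200)
card = bank.mint_card_number(limit=50)    # fifty bucks "apparently on it"
print(bank.authorize(card, 30))           # True
print(bank.authorize(card, 30))           # False: would exceed the $50 cap
```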
But besides legal and technical measures, there is the possibility of market measures.
The organization hypothetically entrusted with the whole standard-contracts thing could, furthermore, provide compliant sites with a “seal of approval”. This would be an image link that fetches from the standard-contracts organization’s site and is encoded to refer to a particular business. The server returns an image that embeds that company’s name and logo, plus a rating for how well it has been complying, weighted to count recent behavior more heavily than years-ago behavior.
The image, furthermore, is a link to the standard-contracts organization’s site, displaying a page with the same info (and more detail, possibly including user-submitted comments).
Sites that change the URLs to point at a different company’s badge with a better rating will end up showing images with the different company’s logo and pointing to pages with the different company’s information.
Sneakier would be to save a copy of the image when the rating is good and display this copy from their own servers. But it would either not link, link to the wrong company’s page, or link to the right company’s page at the privacy company’s site where the real rating would be displayed.
The images would have a big fat CLICK TO VERIFY on them that would make sure users always knew how to check against this sort of fakery. Images lacking the CLICK TO VERIFY or otherwise unusual-looking would become suspicious-looking once the undoctored variety became ubiquitous enough.
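One plausible way to back the CLICK TO VERIFY flow (a sketch with invented names, not a description of any existing service) is for the standards organization to sign each badge URL; a copied image or a swapped company ID then fails verification on the organization’s own site, where the real, recency-weighted rating lives.

```python
import hashlib, hmac

ORG_SECRET = b"known only to the standards organization"

def badge_url(company_id):
    sig = hmac.new(ORG_SECRET, company_id.encode(), hashlib.sha256).hexdigest()
    return f"https://standards.example/badge/{company_id}?sig={sig}"

def verify_and_rate(company_id, sig, ratings):
    expected = hmac.new(ORG_SECRET, company_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "INVALID BADGE"            # forged link or swapped company ID
    # Weight recent behavior more heavily than years-ago behavior:
    weights = [0.5 ** i for i in range(len(ratings))]  # most recent first
    score = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
    return f"{company_id}: compliance {score:.1f}/5"

url = badge_url("acme-widgets")            # embedded on Acme's own pages
sig = url.rsplit("sig=", 1)[1]
print(verify_and_rate("acme-widgets", sig, [5, 4, 2]))   # ~4.3/5
print(verify_and_rate("better-rated-rival", sig, [5]))   # INVALID BADGE
```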
Essentially, we’re talking TRUSTe with teeth here – the seal actually means a pro-privacy contract was signed, rather than merely that the site abides by its privacy policy du jour – combined with a Creative Commons-like enterprise.
Market forces might then punish companies that failed to abide by the contract even in the cases where lawsuits are of limited effectiveness.
The UK’s Data Protection Act only allows data collected under a set of privacy and use rules to be used under that commitment. Changing the purpose by altering the privacy policy such that personal data can now be passed on forces the data collector to purge and start again under the new terms.
A third party would shift the problem in another way: any third party operating data-collection infrastructure for a bunch of startups would be a high-value target, for either external attack or internal subversion. You’d have to have some awfully big bonds in place, and an enforcement mechanism that would be sure to work in practice rather than just in theory. Otherwise the guarantor offers pretty much the reverse of a guarantee.
Have you seen the World Wide Web Without Walls (W5) stuff out of MIT? I don’t know if it’s a feasible solution, as I think any feasible solution goes beyond just engineering a platform, but it’s an interesting idea. Here’s the abstract:
“Today’s Web depends on a particular pact between sites and users: sites invest capital and labor to create and market a set of features, and users gain access to these features by giving up control of their data (photos, personal information, creative musings, etc.). This paper imagines a very different Web ecosystem, in which users retain control of their data and developers can justify their existence without hoarding that data.”
You can get the paper here: http://nms.csail.mit.edu/papers/index.php?detail=172
Ashley, you are right that we do not have a policeman at every elbow. But the Internet is about scale. Someone can steal/sell the information of thousands or millions at a time. These are the kind of events we should attempt to prevent through technical means. I am willing to accept the risk of 1 in a million of losing my private data, just like I am willing to accept the risk of 1 in a million of getting shot on the street. If either of those risks grows to 1 in 10, then I expect a significant change in the system. Maybe I’ll get a private guard to protect me on the street; what can I do on the Internet to protect my identity? (Not using the Internet is not an answer to this question.)
Mihai, we don’t keep a policeman at every elbow to keep people from killing each other. We have effective infrastructure and policy for reacting to it, and that reaction keeps it from happening. Of course there would be a few who would test any new privacy-protecting infrastructure, and if we reacted in the right way it would keep others from following suit.
While a third party would shift the problem, it wouldn’t ‘merely’ shift it. We already use third parties for credit-score management, SSL, and auditing (as noted above). While SSL is a technical mechanism, it is still based on public trust. When a company builds its business model on trust, it still has the option to sell information – but not without effectively destroying the company.
Anybody want to isomorph this over to Enron?
A couple of thoughts:
– Introducing a third party only shifts the privacy problem around.
– Using only the legal system to protect privacy (i.e., privacy = civil right, as advocated by swhx7 above) does not work when both services and infrastructure are outsourced. In particular, punitive laws only kick in after my privacy is gone. Yes, the startup might lose some money or even disappear, and yes, I might get some money, but my private data is no longer private.
Nobody has come close to even tackling the privacy problem from a technical perspective. Why not? Recent technologies have amplified the privacy problem, so it might be possible for new technologies to enhance privacy protections.
How about a contractually-enforced auto-expiry? I.e., “You agree to purge all my info from your database no later than X time period after my last visit, or upon my explicit account deletion.” This is then enforced by auditors or the like. Auditors may be a key here – it’s how banks and shareholders make sure the company’s not cooking the books; why not have some version of them to make sure companies aren’t selling the data?
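The purge itself is easy to implement; what the auditors would verify is that a job like the following sketch (hypothetical schema and retention window) actually runs on schedule, and that no other copies of the data survive it.

```python
import sqlite3, time

RETENTION_SECONDS = 90 * 24 * 3600   # the contract's "X time period"

def purge_expired(db_path):
    # Delete accounts idle past the retention window, plus explicit deletions.
    conn = sqlite3.connect(db_path)
    cutoff = time.time() - RETENTION_SECONDS
    conn.execute(
        "DELETE FROM users WHERE last_visit < ? OR deleted_by_user = 1",
        (cutoff,),
    )
    conn.commit()
    conn.close()
```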
Ed, I think you’re mis-conceptualizing the problem. You go off on the wrong track (albeit with good analysis) here at the start: “If companies can’t commit to honoring privacy, then customers won’t be willing to pay for privacy promises — and the market will undersupply privacy.”
This is an appropriate way to state the problem only if we assume in the first place that privacy is something to be traded in a market. Well, for some things this is appropriate – physical commodities, certainly, and probably most well-defined services. Other things, not so much.
Consider health, for example. The US exemplifies the market approach to this, and the notorious result is that rich folks are as healthy as they want to be, the middle class can’t afford an illness, and the poor may be lucky enough to survive illness or injury in poorly-subsidized emergency rooms.
Other countries place health in more of a non-commodity category and arrange that it won’t depend so much on the individual’s bargaining power.
The alternative your post overlooks is the possible view that privacy should not be treated as a commodity available only in proportion to wealth, but instead as a civil right.
This would remove the imperative for companies to sell user data in order to compete, because the law would prevent any advantage from selling out the users. It would place the individual in a position of effective ownership of his/her data, and mostly resolve the whole problem. Businesses could still make money in less exploitive ways.
You didn’t take my bait, Ed. :^) Would it be a “real shame” if many people gave up on intellectual property protection online? Do you really see nothing the least bit odd about stridently campaigning to prevent copyright owners from controlling the way the intellectual property they distribute is used and redistributed, and at the same time stridently campaigning to enable individuals to control the way the personal information they distribute is used and redistributed?
There are, to be sure, important distinctions between these categories of information. But there are also some striking similarities. For example, both copyright owners and privacy advocates tend to view information as “owned” in some fundamental sense, whereas their ownership is in fact a mere artifact of current public policy, subject to change with the winds of politics and law. Both categories lead to conflict between the zealous defenders of maximal “property rights” and the advocates of maximal “fair use”. Both are subject to the “cat’s out of the bag” problem, exacerbated by modern technology that ensures that once available, information can be copied with amazing speed and stored in innumerable places at minimal cost. And both recast what is at heart a public policy question–how best to serve society as a whole–as a fight between two classes of (often financially) interested party, one trying to hoard control over information by claiming “rightful ownership” of it, and the other trying to wrest it from that ownership in the name of “freedom”.
Now, are you still completely comfortable loudly taking diametrically opposite sides of that fight, depending on the category of information involved?