
Archives for 2009

Intractability of Financial Derivatives

A new result by Princeton computer scientists and economists shows a striking application of computer science theory to the field of financial derivative design. The paper is Computational Complexity and Information Asymmetry in Financial Products by Sanjeev Arora, Boaz Barak, Markus Brunnermeier, and Rong Ge. Although computation has long been used in the financial industry for program trading and “the thermodynamics of money”, this new paper applies an entirely different kind of computer science: Intractability Theory.

A financial derivative is a contract specifying a payoff calculated by some formula based on the yields or prices of a specific collection of underlying assets. Consider the securitization of debt: a CDO (collateralized debt obligation) is a security formed by packaging together hundreds of home mortgages. The CDO is supposedly safer than the individual mortgages, since it spreads the risk (not every mortgage is supposed to default at once). Furthermore, a CDO is usually divided into “senior tranches” which are guaranteed not to drop in value as long as the total number of defaults in the pool does not exceed some threshold; and “junior tranches” that are supposed to bear all the risk.

Trading in derivatives brought down Lehman Brothers, AIG, and many other buyers, based on mistaken assumptions about the independence of the underlying asset prices; they underestimated the danger that many mortgages would all default at the same time. But the new paper shows that in addition to that kind of danger, risks can arise because a seller can deliberately construct a derivative with a booby trap hiding in plain sight.

It’s like encryption: it’s easy to construct an encrypted message (your browser does this all the time), but it’s hard to decrypt without knowing the key (we believe even the NSA doesn’t have the computational power to do it). Similarly, the new result shows that the seller can construct the CDO with a booby trap, but even Goldman Sachs won’t have enough computational power to analyze whether a trap is present.

The paper shows the example of a high-volume seller who builds 1000 CDOs from 1000 asset-classes of home mortgages. Suppose the seller knows that a few of those asset classes are “lemons” that won’t pay off. The seller is supposed to randomly distribute the asset classes into the CDOs; this minimizes the risk for the buyer, because there’s only a small chance that any one CDO has more than a few lemons. But the seller can “tamper” with the CDOs by putting most of the lemons in just a few of the CDOs. This has an enormous effect on the senior tranches of those tampered CDOs.
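To see the arithmetic, here is a small simulation sketch in Python. The portfolio sizes, the number of lemons, and the default threshold are invented for illustration (the paper’s construction uses different, overlapping portfolios); the point is just that a random allocation leaves almost every senior tranche intact, while a tampered allocation wipes out a handful of them.

```python
# Illustrative sketch, NOT the paper's exact model: compare a random
# allocation of "lemon" asset classes against a tampered one in which
# the seller concentrates the lemons in a few CDOs. All parameters
# (counts, threshold) are made up for illustration.
import random

N_ASSETS = 1000      # asset classes
N_CDOS = 100         # CDOs, each referencing 10 asset classes
PER_CDO = 10
N_LEMONS = 50        # asset classes the seller knows will default
THRESHOLD = 3        # senior tranche is hit only beyond 3 defaults

def senior_tranches_hit(allocation, lemons):
    """Count CDOs whose senior tranche is hit (defaults exceed threshold)."""
    return sum(1 for cdo in allocation
               if len(set(cdo) & lemons) > THRESHOLD)

lemons = set(range(N_LEMONS))
assets = list(range(N_ASSETS))

# Honest seller: shuffle the asset classes and deal them out at random.
random.shuffle(assets)
honest = [assets[i * PER_CDO:(i + 1) * PER_CDO] for i in range(N_CDOS)]

# Tampering seller: pack the lemons into the first few CDOs,
# fill the rest with good asset classes.
good = [a for a in range(N_ASSETS) if a not in lemons]
lemon_pool = sorted(lemons)
tampered = []
for i in range(N_CDOS):
    take = lemon_pool[:PER_CDO]
    lemon_pool = lemon_pool[PER_CDO:]
    cdo = take + good[:PER_CDO - len(take)]
    good = good[PER_CDO - len(take):]
    tampered.append(cdo)

print("senior tranches hit (random):  ", senior_tranches_hit(honest, lemons))
print("senior tranches hit (tampered):", senior_tranches_hit(tampered, lemons))
```

With these made-up numbers, the random allocation almost never pushes a CDO past the threshold, while the tampered one reliably destroys the senior tranches of the few CDOs that received the lemons.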

In principle, an alert buyer can detect tampering even if he doesn’t know which asset classes are the lemons: he simply examines all 1000 CDOs and looks for a suspicious overrepresentation of some of the asset classes in some of the CDOs. What Arora et al. show is that this is an NP-complete problem (“densest subgraph”). This problem is believed to be computationally intractable; thus, even the most alert buyer won’t have enough computational power to do the analysis.
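To get a feel for why the buyer’s check is expensive, here is a brute-force sketch with sizes shrunk to toy scale. Tampering shows up as an unusually dense bipartite subgraph between a few CDOs and a few asset classes, and the generic way to find it is to enumerate subsets, whose number grows exponentially with the portfolio size; the data here is made up.

```python
# Hedged illustration of why detection is hard: the only generic way to
# find a suspiciously dense bipartite subgraph (a few asset classes that
# appear in a few CDOs far more often than chance predicts) is to try
# subsets, and the number of subsets explodes. Sizes here are tiny.
from itertools import combinations

def densest_bipartite_subgraph(cdos, k_cdos, k_assets):
    """Brute force: among all choices of k_cdos CDOs and k_assets asset
    classes, return the pair with the most memberships (edges)."""
    all_assets = sorted({a for cdo in cdos for a in cdo})
    best, best_edges = None, -1
    for cdo_subset in combinations(range(len(cdos)), k_cdos):      # C(n, k) choices...
        for asset_subset in combinations(all_assets, k_assets):    # ...times C(m, k') more
            edges = sum(1 for i in cdo_subset
                          for a in asset_subset if a in cdos[i])
            if edges > best_edges:
                best, best_edges = (cdo_subset, asset_subset), edges
    return best, best_edges

# Toy portfolio: CDOs 0 and 1 share the suspicious asset classes {7, 8, 9}.
toy = [{7, 8, 9, 1}, {7, 8, 9, 2}, {3, 4, 5, 6}, {0, 1, 2, 3}]
print(densest_bipartite_subgraph(toy, k_cdos=2, k_assets=3))
```

At toy scale this finishes instantly; at the scale of 1000 CDOs over 1000 asset classes, the number of subsets to try is astronomically large, which is the intuition behind the intractability result.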

Arora et al. show it’s even worse than that: even after the buyer has lost a lot of money (because enough mortgages defaulted to devalue his “senior tranche”), he can’t prove that the tampering occurred: he can’t prove that the distribution of lemons wasn’t random. This makes it hard to get recourse in court; it also makes it hard to regulate CDOs.

Intractability Theory forms the basis for several of the technologies discussed on Freedom-to-Tinker: cryptography, digital-rights management, watermarking, and others. Perhaps financial policy is now another one.

Sidekick Users' Data Lost: Blame the Cloud?

Users of Sidekick mobile phones saw much of their data disappear last week due to engineering problems at a Microsoft data center. Sidekick devices lose the contents of their memory when they don’t have power (e.g. when the battery is being changed), so all data is transmitted to a data center for permanent storage — which turned out not to be so permanent.

(The latest news is that some of the data, perhaps most of it, may turn out to be recoverable.)

A common response to this story is that this kind of danger is inherent in “cloud” computing services, where you rely on some service provider to take care of your data. But this misses the point, I think. Preserving data is difficult, and individual users tend to do a mediocre job of it. Admit it: You have lost your own data at some point. I know I have lost some of mine. A big, professionally run data center is much less likely to lose your data than you are.

It’s worth noting, too, that many cloud services face lower risk of this sort of problem. My email, for example, lives in the cloud — the “official copy” is on a central server, and copies are downloaded frequently to my desktop and laptop computers. If the server were to go up in flames, along with all of the server backups, I would still be in good shape, because I would still have copies of everything on my desktop and laptop.

For my email and similar services, the biggest risk to data integrity is not that the server will disappear altogether, but that the server will misbehave in subtle ways, causing my stored data to be corrupted over time. Thanks to the automatic synchronization between the server and my two clients (desktop and laptop), bad data could be replicated silently into all copies. In principle, some of the damage could be repaired later, using the server’s backups, but that’s a best case scenario.
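One way to notice that kind of silent corruption before it spreads, sketched here purely as an assumption rather than anything these services actually do, is to keep an independent log of content hashes and check each synced copy against it. The file names below are hypothetical.

```python
# A minimal sketch of catching silent corruption before it propagates:
# snapshot hashes of known-good copies, then compare later copies
# against the snapshot. Paths and layout are hypothetical.
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("mail_manifest.json")   # hypothetical hash log

def digest(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def record(paths):
    """Snapshot the current hashes of known-good copies."""
    MANIFEST.write_text(json.dumps({p: digest(p) for p in paths}, indent=2))

def verify(paths):
    """Return the copies whose hashes no longer match the snapshot."""
    known = json.loads(MANIFEST.read_text())
    return [p for p in paths if known.get(p) != digest(p)]

# Usage (hypothetical files): record(["mail/inbox.mbox"]); later,
# verify(["mail/inbox.mbox"]) lists files that changed unexpectedly.
```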

This risk, of buggy software corrupting data, has always been with us. The question is not whether problems will happen in the cloud — in any complex technology, trouble comes with the territory — but whether the cloud makes a problem worse.

PrivAds: Behavioral Advertising without Tracking

There’s an interesting new paper out of Stanford and NYU, about a system called “PrivAds” that tries to provide behavioral advertising on web sites, without having a central server gather detailed information about user behavior. If the paper’s approach turns out to work, it could have an important impact on the debate about online advertising and privacy.

Advertisers have obvious reasons to show you ads that match your interests. You can benefit too, if you see ads that are relevant to your needs, rather than ones you don’t care about. The problem, as I argued in my Congressional testimony, comes when sites track your activities, and build up detailed files on you, in order to do the targeting.

PrivAds tries to solve this problem by providing behavioral advertising without having any server track you. The idea is that your own browser will track you, and analyze your online activities to build a model of your interests, but your browser won’t reveal this information to anyone else. When a site wants to show you an interest-based ad, your browser will choose the ad from a portfolio of ads offered by the ad service.
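Here is a minimal sketch of that client-side idea, with invented ad categories and scoring (the actual PrivAds selection algorithm may work quite differently): the interest profile lives only in the browser, and the browser scores the server’s portfolio locally.

```python
# Hedged sketch, not PrivAds' actual algorithm: the browser keeps a local
# interest profile and scores a portfolio of candidate ads itself, so no
# profile ever leaves the machine. Topics and weights are invented.
def choose_ad(portfolio, local_profile):
    """portfolio: list of {'id': ..., 'topics': set(...)} sent by the ad
    service; local_profile: {topic: weight} built from browsing history
    and stored only in the browser."""
    def score(ad):
        return sum(local_profile.get(topic, 0.0) for topic in ad["topics"])
    return max(portfolio, key=score)

portfolio = [
    {"id": "ad-bikes",  "topics": {"cycling", "outdoors"}},
    {"id": "ad-laptop", "topics": {"computers", "electronics"}},
]
local_profile = {"computers": 0.9, "cooking": 0.4}   # never sent anywhere
print(choose_ad(portfolio, local_profile)["id"])      # -> "ad-laptop"
```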

The tricky part is how your browser can do all of this without incidentally leaking your activities to the server. For example, the ad service needs to know how many times each ad was shown. How can your browser report this without revealing which ads you saw? PrivAds offers a solution based on fancy cryptography, so that the ad service can aggregate reports from many users without being able to see the users’ individual reports. Similarly, every interaction between your browser and the outside world must be engineered carefully so that behavioral advertising can occur but the browser doesn’t telegraph your actions.
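As one illustration of how such aggregation can work, and not necessarily the scheme the PrivAds paper uses, here is a sketch based on additive secret sharing: each browser splits its report into two random shares sent to two non-colluding aggregators, so neither aggregator alone learns anything about an individual user.

```python
# Hedged sketch of one aggregation idea (not necessarily the paper's
# scheme): each browser splits its per-ad view count into two random
# additive shares, one per non-colluding aggregator. Either aggregator
# alone sees only random numbers; adding the two aggregate totals
# recovers the overall count.
import secrets

MOD = 2**61 - 1  # arithmetic modulus; the choice is arbitrary for this sketch

def split(count):
    r = secrets.randbelow(MOD)
    return r, (count - r) % MOD        # (share for A, share for B)

def aggregate(shares):
    return sum(shares) % MOD

# Three users each report how many times they saw a particular ad.
user_counts = [2, 0, 5]
shares = [split(c) for c in user_counts]
total_a = aggregate(s[0] for s in shares)   # seen by aggregator A only
total_b = aggregate(s[1] for s in shares)   # seen by aggregator B only
print((total_a + total_b) % MOD)            # -> 7, with no individual report revealed
```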

It’s not clear at this point whether the PrivAds approach will work, in the sense of protecting privacy without reducing the effectiveness of ad targeting. It’s clear, though, that PrivAds is asking an important question.

If the PrivAds approach succeeds, demonstrating that behavioral advertising does not require tracking, this doesn’t mean that companies will stop wanting to track you — but it does mean that they won’t be able to use advertising as an excuse to track you.

Chilling and Warming Effects

For several years, the Chilling Effects Clearinghouse has been cataloging the effects of legal threats on online expression and helping people to understand their rights. Amid all the chilling we continue to see, it’s welcome to find rays of sunshine when bloggers stand up to threats, helping to stop the cycle of threat-and-takedown.

The BoingBoing team did this the other day when they got a legal threat from Ralph Lauren’s lawyers over an advertisement they mocked on the BoingBoing blog for featuring a stick-thin model. The lawyers claimed copyright infringement, saying “PRL owns all right, title, and interest in the original images that appear in the Advertisements.” Other hosts pull content “expeditiously” when they receive these notices (as Google did when notified of the post on Photoshop Disasters), and most bloggers and posters don’t counter-notify, even though Chilling Effects offers a handy counter-notification form.

Not BoingBoing: they posted the letter (and the image again) along with copious mockery, including an offer to feed the obviously starved models, and other sources picked up on the fun. The image has now been seen by many more people than would have discovered it in BoingBoing’s archives, in a pattern the press has nicknamed the “Streisand Effect.”

We use the term “chilling effects” to describe indirect legal restraints, or self-censorship, because most cease-and-desist letters don’t go through the courts. The lawyers (and non-lawyers) sending them rely on the in terrorem effects of threatened legal action, and often succeed in silencing speech for the cost of an e-postage stamp.

Actions like BoingBoing’s use the court of public opinion to counter this squelching. They fight legalese with public outrage (in support of legal analysis), and, at the same time, help other readers to understand they have similar rights. Further, they increase the “cost” of sending cease-and-desists, as they make potential claimants consider the publicity risk of being made to look foolish, bullying, or worse.

For those curious about the underlying legalities here, the Copyright Act makes clear that fair use, including for the purposes of commentary, criticism, and news reporting, is not an infringement of copyright. See Chilling Effects’ fair use FAQ. Yet the DMCA notice-and-takedown procedure encourages ISPs to respond to complaints with takedown, not investigation and legal balancing. Providers like BoingBoing’s Priority Colo should also get credit for their willingness to back their users’ responses.

As a result of the attention, Ralph Lauren apologized for the image: “After further investigation, we have learned that we are responsible for the poor imaging and retouching that resulted in a very distorted image of a woman’s body. We have addressed the problem and going forward will take every precaution to ensure that the caliber of our artwork represents our brand appropriately.”

May the warming (and proper attention to the health of fashion models) continue!

[cross-posted at Chilling Effects]

Privacy as a Social Problem, Not a Technology Problem

Bob Blakley had an interesting post Monday, arguing that technologists tend to frame the privacy issue poorly. (I would add that many non-technologists use the same framing.) Here’s a sample:

That’s how privacy works; it’s not about secrecy, and it’s not about control: it’s about sociability. Privacy is a social good which we give to one another, not a social order in which we control one another.

Technologists hate this; social phenomena aren’t deterministic and programmers can’t write code to make them come out right. When technologists are faced with a social problem, they often respond by redefining the problem as a technical problem they think they can solve.

The privacy framing that’s going on in the technology industry today is this:

Social Frame: Privacy is a social problem; the solution is to ensure that people use sensitive personal information only in ways that are beneficial to the subject of the information.

BUT as technologists we can’t … control people’s behavior, so we can’t solve this problem. So instead let’s work on a problem that sounds similar:

Technology Frame: Privacy is a technology problem; since we can’t make people use sensitive personal information sociably, the solution is to ensure that people never see others’ sensitive personal information.

We technologists have tried to solve the privacy problem in this technology frame for about a decade now, and, not surprisingly (information wants to be free!) we have failed.

The technology frame isn’t the problem. Privacy is the problem. Society can and routinely does solve the privacy problem in the social frame, by getting the vast majority of people to behave sociably.

This is an excellent point, and one that technologists and policymakers would be wise to consider. Privacy depends, ultimately, on people and institutions showing a reasonable regard for the privacy interests of others.

Bob goes on to argue that technologies should be designed to help these social mechanisms work.

A sociable space is one in which people’s social and antisocial actions are exposed to scrutiny so that normal human social processes can work.

A space in which tagging a photograph publicizes not only the identities of the people in the photograph but also the identities of the person who took the photograph and the person who tagged the photograph is more sociable than a space in which the only identity revealed is that of the person in the photograph – because when the picture of Jimmy holding a martini washes up on the HR department’s desk, Jimmy will know that Johnny took it (at a private party) and Julie tagged him – and the conversations humans have developed over tens of thousands of years to handle these situations will take place.

Again, this is an excellent and underappreciated point. But we need to be careful how far we take it. If we go beyond Bob’s argument, and we say that good design of the kind he advocates can completely solve the online privacy problem, then we have gone too far.

Technology doesn’t just move old privacy problems online. It also creates new problems and exacerbates old ones. In the old days, Johnny and Julie might have taken a photo of Jimmy drinking at the office party, and snail-mailed the photo to HR. That would have been a pretty hostile act. Now, the same harm can arise from a small misunderstanding: Johnny and Julie might assume that HR is more tolerant, or that HR doesn’t watch Facebook; or they might not realize that a site allows HR to search for photos of Jimmy. A photo might be taken by Johnny and tagged by Julie, even though Johnny and Julie don’t know each other. All in all, the photo scenario is more likely to happen today than in the pre-Net age.

This is just one example of what James Grimmelmann calls Accidental Privacy Spills. Grimmelmann tells the story of a private email message that was forwarded and re-forwarded to thousands of people, not by malice but because many people made the seemingly harmless decision to forward it to a few friends. This would never have happened with a personal letter. (Personal letters are sometimes publicized against the wishes of the author, but that’s very rare and wouldn’t have happened in the case Grimmelmann describes.) As the cost of capturing, transmitting, storing, and searching photos and other digital information falls to near-zero, it’s only natural that more capturing, transmitting, storing, and searching of information will occur.

Good design is not the whole solution to our privacy problem. But design has the huge advantage that we can get started on it right away, without needing to reach some sweeping societal agreement about what the rules should be. If you’re designing a product, or deciding which product to use, you can support good privacy design today.