January 16, 2025

Further adventures in personal credit

In our last installment, I described how one of the mortgage vendors I was considering for my new home loan failed to trigger the credit-alerting mechanism (Debix) to which I had subscribed. Since then, I’ve learned several interesting facts. First, the way that Debix operates is that they insert a line into your credit reports which says, in effect, “you, the reader of this line, are required to call this 1-800 telephone number, prior to granting credit based on what you see in this report.” That 800-number finds its way to Debix, where a robot answers the phone and asks the human who called it for their name/organization, and the purpose of the request. Then, the Debix robot calls up their customer and asks permission to authorize the request, playing back the recordings made earlier.

The only thing that makes this “mandatory” is a recent law (sorry, I don’t have the citation handy) which specifies how lenders and such are required to act when they see one of these alerts in a credit report. The mechanism, aside from legal requirements, is otherwise used at the discretion of a human loan officer. This leads me to wonder whether or not the mechanism works when there isn’t a human loan officer involved. I may just need to head over to some big box store and purchase myself something with an in-store instant-approval credit card, just to see what happens. (With my new house will inevitably come a number of non-trivial expenses, and oh what great savings I can get with those insta-credit cards!)

So does the mechanism work? Yesterday morning, as I was getting into the car to go to work, my cell phone rang with an 800-number as the caller-ID. “Hello?” It was the Debix robot, asking for my approval. Debix played a recording of an apparently puzzled loan officer who identified herself as being from the bank that, indeed, I’m using for my loan. Well that’s good. Could the loan officer have been lying? Unlikely. An identity thief isn’t really the one who gets to see the 800-number. It’s the loan officer of the bank that the identity thief is trying to defraud who then makes the call. That means our prospective thief would need to guess the proper bank to use that would fool me into giving my okay. Given the number of choices, the odds of the thief nailing it on the first try are pretty low. (Unless our prospective thief is clever enough to have identified a bank that’s too lazy to follow the proper procedure and call the 800-number; more on this below).

A side-effect of my last post was that it got noticed by some people inside Debix and I ended up spending some quality time with one of their people on the telephone.  They were quite interested in my experiences.  They also told me, assuming everything is working right, that there will be some additional authentication hoops that the lender is (legally) mandated to jump through between now and when they actually write out the big check. Our closing date is next week, Friday, so I should have one more post when it’s all over to describe how all of that worked in the end.

Further reading: The New York Times recently had an article (“In ID Theft, Some Victims See an Opportunity”, November 16, 2007) discussing Debix and several other companies competing in the same market. Here’s an interesting quote:

Among its peers, LifeLock has attracted the most attention — much of it negative. In radio and television ads, Todd Davis, chief executive of LifeLock, gives out his Social Security number to demonstrate his faith in the service. As a result, he has been hit with repeated identity theft attacks, including one successful effort this summer in which a check-cashing firm gave out a $500 loan to a Texas fraudster without ever checking Mr. Davis’s credit report.

Sure enough, if you go to LifeLock’s home page, you see Mr. Davis’s Social Security number, right up front. And, unsurprisingly, he fell victim because, indeed, fraudsters identified a loan organization that didn’t follow the (legally) mandated protocol.

How do we solve the problem? Legally mandated protocols need to become technically mandatory protocols. The sort of credit alerts placed by Debix, LifeLock, and others need to be more than just a line in the consumer’s credit file. Instead, the big-3 credit bureaus need to be (legally) required not to divulge anything beyond the credit-protection vendor’s 800-number without the explicit (technical) permission of the vendor (on behalf of the user). Doing this properly would require the credit bureaus to standardize and implement a suitable Internet-based API with all the right sorts of crypto authentication and so forth – nothing technically difficult about that. Legally, I’d imagine they’d put up more of a fight, since they may not like these startups getting in the way of their business.
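To make the idea concrete, here is a minimal sketch of what such a permission-gated API might look like. Everything here is an illustrative assumption, not a real bureau or vendor interface: I'm using a simple HMAC shared secret to stand in for "all the right sorts of crypto authentication," where a real design would use proper PKI.

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned between the credit bureau and
# the credit-protection vendor; a real system would use public-key crypto.
VENDOR_KEY = b"shared-secret-provisioned-out-of-band"

def vendor_issue_token(consumer_id: str, requester: str) -> str:
    """Vendor signs an approval after the consumer okays the request
    (e.g., via the phone-robot confirmation described above)."""
    msg = f"{consumer_id}:{requester}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def bureau_release_report(consumer_id: str, requester: str, token: str) -> bool:
    """Bureau releases the full report only if the vendor's token verifies
    for this exact consumer and requester; otherwise the lender sees
    nothing beyond the vendor's 800-number."""
    expected = hmac.new(VENDOR_KEY, f"{consumer_id}:{requester}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

token = vendor_issue_token("consumer-123", "Example Bank")
print(bureau_release_report("consumer-123", "Example Bank", token))   # True
print(bureau_release_report("consumer-123", "Shady Lender", token))   # False
```

The point of the sketch is that the release check is enforced in code at the bureau, so a lazy loan officer never gets the choice of skipping the protocol.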

The place where the technical difficulty would ramp up is that the instant-credit-offering big-box stores would want to automate their side of the phone robot conversation. That would then require all these little startups to standardize their own APIs, which seems difficult when they’re all still busily inventing their own business models.

(Sidebar: I set up this Debix thing months ago. Then I got a phone call, out of the blue, asking me to remember my PIN. Momentary panic: what PIN did I use? Same as the four-digit one I use for my bank ATM? Same as the six-digit one I use for my investment broker? Same as the four-digit one used by my preferred airline’s frequent flyer web site which I can’t seem to change? Anyway, I guessed right. I’d love to know how many people forget.)

Radiohead’s Low Price Might Mean Higher Profit

Radiohead’s name-your-own-price sale of its new In Rainbows album has generated lots of commentary, especially since comScore released data claiming that 62% of customers set their price at zero, with the remaining 38% setting an average price of $6, which comes to an average price of $2.28 per customer. (There are reasons to question these numbers, but let’s take them as roughly accurate for the sake of argument.)

Bill Rosenblatt bemoaned the low price, calling it a race to the bottom. Tim Lee responded by pointing out that Rosenblatt’s “race to the bottom” is just another name for price competition, which is hardly a sign of an unhealthy market. The music market is more competitive than before, and production costs are lower, so naturally prices will go down.

But there’s another basic economic point missing in this debate: Lower average price does not imply lower profit. Radiohead may well be making more money because the price is lower.

To see why this might be true, imagine that there are 10 customers willing to pay $10 for your album, 100 customers willing to pay only $2, and 1000 customers who will only listen if the price is zero. (For simplicity assume the cost of producing an extra copy is zero.) If you price the album at $10, you get ten buyers and make $100. If you price it at $2, you get 110 buyers and make $220. Lowering the price makes you more money.

Or you can ask each customer to name their own price, with a minimum of $2. If all customers pay their own valuation, then you get $10 from 10 customers and $2 from 100 customers, for a total of $300. You get perfect price discrimination – each customer pays his own valuation – which extracts the maximum possible revenue from these 110 customers.
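The arithmetic in these two scenarios is easy to check directly. Here's a short computation using the hypothetical customer valuations from the text:

```python
# Customer valuations from the example: 10 customers value the album at
# $10, 100 at $2, and 1000 at $0. Marginal cost of a copy is zero.
valuations = [10] * 10 + [2] * 100 + [0] * 1000

def revenue(fixed_price):
    """Everyone whose valuation meets the fixed price buys at that price."""
    return sum(fixed_price for v in valuations if v >= fixed_price)

print(revenue(10))  # 100  -- ten buyers at $10
print(revenue(2))   # 220  -- 110 buyers at $2

# Name-your-own-price with a $2 minimum, assuming honest customers pay
# their full valuation: this is perfect price discrimination.
print(sum(v for v in valuations if v >= 2))  # 300
```

Lowering the fixed price more than doubles revenue here, and letting buyers name their price (honestly) beats any single fixed price.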

Of course, in real life some customers who value the album at $10 will name a price of $2, so your revenue won’t reach the full $300. But if even one customer pays more than $2, you’re still better off than you’d be with any fixed price. Your price discrimination is imperfect, but it’s still better than not discriminating at all.

Now imagine that you can extract some nonzero amount of revenue from the customers who aren’t willing to pay at all, perhaps because listening will make them more likely to buy your next album or recommend it to their friends. If you keep the name-your-own-price deal, and remove the $2 minimum, then you’ll capture this value because customers can name a price of zero. Some of the $10-value or $2-value people might also name a price of zero, but if not too many do so you might be better off removing the minimum and capturing some value from every customer.

If customers are honest about their valuation, this last scenario is the most profitable – you make $300 immediately plus the indirect benefit from the zero-price listeners. Some pundits will be shocked and saddened that your revenue is only 27 cents per customer, and 90% of your customers paid nothing at all. But you won’t care – you’ll be too busy counting your money.

Finally, note that none of this analysis depends on any assumptions about customers’ infringement options. Even if it were physically impossible to make infringing copies of the album, the analysis would still hold because it depends only on how badly customers want to hear your music and how likely they are to name a price close to their true valuation. Indeed, factoring in the possibility of infringement only strengthens the argument for lowering the average price.

By all accounts, Radiohead’s album is a musical and financial success. Sure, it’s a gimmick, but it could very well be a smart pricing strategy.

How Can Government Improve Cyber-Security?

Wednesday was the kickoff meeting of the Commission on Cyber Security for the 44th Presidency, of which I am a member. The commission has thirty-four members and four co-chairs: Congressmen Jim Langevin and Michael McCaul, Admiral Bobby Inman, and Scott Charney. It was organized by the Center for Strategic and International Studies, a national security think tank in Washington. Our goal is to provide advice about cyber-security policy to the next presidential administration. Eventually we’ll produce a report with our findings and recommendations.

I won’t presume to speak for my fellow members, and it’s way too early to predict the contents of our final report. But the meeting got me thinking about what government can do to improve cyber-security. I’ll offer a few thoughts here.

One of the biggest challenges comes from the broad and porous border between government systems and private systems. Not only are government computers networked pervasively to privately-owned computers, but government also relies heavily on off-the-shelf technologies whose characteristics are shaped by the market choices of private parties. While it’s important to better protect the more isolated, high-security government systems, real progress elsewhere will depend on ordinary technologies getting more secure.

Ordinary technologies are designed by the market, and the market is big and very hard to budge. I’ve written before about the market failures that cause security to be under-provided. The market, subject to these failures, controls what happens in private systems, and in practice also in ordinary government systems.

To put it another way, although our national cybersecurity strategy might be announced in Washington, our national cybersecurity practice will be defined in the average Silicon Valley cubicle. It’s hard to see what government can do to affect what happens in that cubicle. Indeed, I’d judge our policy as a success if we have any positive impact, no matter how small, in the cubicle.

I see three basic strategies for doing this. First, government can be a cheerleader, exhorting people to improve security, convening meetings to discuss and publicize best practices, and so on. This is cheap and easy, won’t do any harm, and might help a bit at the margin. Second, government can use its purchasing power. In practice this means deliberately overpaying for security, to boost demand for higher-security products. This might be expensive, and its effects will be limited because the majority of buyers will still be happy to pay less for less secure systems. Third, government can invest in human capital, trying to improve education in computer technology generally and computer security specifically, and supporting programs that train researchers and practitioners. This last strategy is slow but I’m convinced it can be effective.

I’m looking forward to working through these problems with my fellow commission members. And I’m eager to hear what you all think.

Comcast Podcast

Recently I took part in a Technology Liberation Front podcast about the Comcast controversy, with Adam Thierer, Jerry Brito, Richard Bennett, and James L. Gattuso. There’s now a (slightly edited) transcript online.

Economics of Eavesdropping For Pay

Following up on Andrew’s post about eavesdropping as a profit center for telecom companies, let’s take a quick look at the economics of eavesdropping for money. We’ll assume for the sake of argument that (1) telecom (i.e. transporting bits) is a commodity so competition forces providers to sell it essentially at cost, (2) the government wants to engage in certain eavesdropping and/or data mining that requires cooperation from telecom providers, (3) cooperation is optional for each provider, and (4) the government is willing to pay providers to cooperate.

A few caveats are in order. First, we’re not talking about situations, such as traditional law enforcement eavesdropping pursuant to a warrant, where the provider is compelled to cooperate. Providers will cooperate in those situations, as they should. We’re only talking about additional eavesdropping where the providers can choose whether to cooperate. Second, we don’t care whether the government pays for cooperation or threatens retaliation for non-cooperation – either way the provider ends up with more money if it cooperates. Finally, we’re assuming that the hypothetical surveillance or data mining program, and the providers’ participation in it, is lawful; otherwise the law will (eventually) stop it. With those caveats out of the way, let the analysis begin.

Suppose a provider charges each customer an amount P for telecom service. The provider makes minimal profit at price P, because by assumption telecom is a commodity. The government offers to pay the provider an amount E per customer if the provider allows surveillance. The provider has two choices: accept the payment and offer service with surveillance at a price of P-E, or refuse the payment and offer reduced-surveillance service at price P. A rational provider will do whatever it thinks its customers prefer: Would typical customers rather save E, or would they rather avoid surveillance?
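The customer's side of this choice reduces to a simple comparison. Here's a sketch with illustrative numbers (P and E are not from the post); each customer takes the discounted surveilled service only if the discount E exceeds the dollar value they place on avoiding surveillance:

```python
# Illustrative assumptions: P is the commodity price of telecom service,
# E is the government's per-customer payment for allowing surveillance.
P = 40.0   # monthly price of telecom service
E = 5.0    # government payment per customer, passed through as a discount

print(P - E)  # 35.0 -- effective price of the surveilled service

def customer_prefers_surveilled(privacy_value: float) -> bool:
    """A customer accepts surveillance only if the discount E is worth
    more to them than avoiding surveillance."""
    return E > privacy_value

# A customer who values privacy at $3/month takes the discount...
print(customer_prefers_surveilled(3.0))   # True
# ...while one who values it at $8/month pays full price P instead.
print(customer_prefers_surveilled(8.0))   # False
```

This is why the provider is just an intermediary in this scenario: its rational choice simply aggregates these per-customer comparisons.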

In this scenario, surveillance isn’t actually a profit center for the provider – the payment, if accepted, gets passed on to customers as a price discount. The provider is just an intermediary; the customers are actually deciding.

But of course the government won’t allow each customer to make an individual decision whether to allow surveillance – then the bad guys could pay extra to avoid being watched. If enough customers prefer for whatever reason to avoid surveillance (at a cost of E), then some provider will emerge to serve them. So the government will have to set E large enough that the number of customers who would refuse the payment is not large enough to support even one provider. This implies a decent-sized value for E.

But there’s another possibility. Suppose a provider claims to be refusing the payment, but secretly accepts the payment and allows surveillance of its customers. If customers fall for the lie, then the provider can charge P while pocketing the government payment E. Now surveillance is a profit center for the provider, as long as customers don’t catch on.

If customers know that providers might be lying, savvy customers will discount a provider’s claim to be refusing the payments. So the premium customers are willing to pay for (claims of) avoiding surveillance will be smaller, and government can buy more surveillance more cheaply.

The incentives here get pretty interesting. Government benefits by undermining providers’ credibility, as that lowers the price government has to pay for surveillance. Providers who are cooperating with the government want to undermine their fellow providers’ credibility, thereby making customers less likely to buy from surveillance-resisting providers. Providers who claim, truthfully or not, to be refusing surveillance want to pick fights with the government, making it look less likely that they’re cooperating with the government on surveillance.

If government wants to use surveillance, why doesn’t it require providers to cooperate? That’s a political question that deserves a post of its own.