November 25, 2024

A Visit From Bill Gates

Bill Gates visited Princeton on Friday, accompanied by his father, a prominent Seattle lawyer who now heads the Gates Foundation, and by Kevin Schofield, a Microsoft exec (and Princeton alumnus) who helped to plan the university visits.

After speaking briefly with Shirley Tilghman, Princeton’s president, Mr. Gates spent an hour in a roundtable discussion with a smallish group of computer science faculty. I was lucky enough to be one of them. The meeting was closed, so I won’t give you a detailed play-by-play. Essentially, we told him about what is happening in computer science at Princeton; he asked questions and conversation ensued. We talked mostly about computer science education. Along the way I gave a quick description of the new infotech policy course that will debut in the spring. Overall, it was a good, high-energy discussion, and Mr. Gates showed a real passion for computer science education.

After the roundtable, he headed off to Richardson Auditorium for a semi-public lecture and Q&A session. (I say semi-public because there wasn’t space for everybody who wanted to get in; tickets were allocated to students by lottery.) The instructions that came with my ticket made it seem like security in the auditorium would be very tight (no backpacks, etc.), but in fact the security measures in place were quite unobtrusive. An untrained eye might not have noticed anything different from an ordinary event. I showed up for the lecture at the last minute, coming straight from the faculty roundtable, so I got one of the worst seats in the whole place. (Not that I’m complaining – I certainly wouldn’t have traded away my seat in the faculty roundtable for a better seat at the lecture!)

After an introduction from Shirley Tilghman, Mr. Gates took the stage. He stood alone on the stage and talked for a half-hour or so. His presentation was punctuated by two videos. The first showed a bunch of recent Princeton alums who work at Microsoft talking about life at Microsoft in a semi-serious, semi-humorous way. (The highlight was seeing Corey in a toga.) The second video was a five-minute movie in which Mr. Gates finds himself in the world of Napoleon Dynamite. It co-stars Jon Heder, who played Napoleon in the movie. I haven’t seen the original movie but I’m told that many of the lines and gags in the video come from the movie. People who know the original movie seem to have found the video funny.

The theme of the lecture was the seamless coolness of the future computing environment. It was heavy on promotion and demonstrations of Microsoft products.

The Q&A was pretty interesting. He was asked how to reconcile his current cheerleading for C.S. education with his own history of dropping out of college. He had a funny and thoughtful answer. I assume he’s had plenty of chances to hone his answer to that question.

A student asked him a question about DRM. His answer was fairly general, talking about the importance of both consumer flexibility and revenue for creators. He went on to say some harsh things about Blu-Ray DRM, saying that the system over-restricted consumers’ use and that its content-industry backers were making a mistake by pushing for it.

(At this point I had to leave due to a previous commitment, so from here on I’m relying on reports from people who were there.)

Another student asked him about intellectual property, suggesting that Microsoft was both a beneficiary and a victim of a strong patent system. Mr. Gates said that the patent system is basically sound but could benefit from some tweaking. He didn’t elaborate, but I assume he was referring to patent reform suggestions Microsoft has made previously.

After the Q&A, Mr. Gates accepted the “Crystal Tiger” award from a student group. Then he left for his next university visit, reportedly Howard University.

Tax Breaks for Security Tools

Congress may be considering offering tax breaks to companies that deploy cybersecurity tools, according to an Anne Broache story at news.com. This might be a good idea, depending on how it’s done.

I’ve written before about the economics of cybersecurity. A user’s investment in security protects the user himself; and he has an incentive to pay for the efficient level of protection for himself. But each user’s security choices also affect others. If Alice’s computer is compromised, it can be used as a springboard for attacking Bob’s computer, so Alice’s decisions affect Bob’s security. Alice has little or no incentive to invest in protecting Bob. This kind of externality is common and leads to underinvestment in security.
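The underinvestment argument can be made concrete with a toy model. This is a minimal sketch with made-up numbers (the payoff function and its coefficients are my own assumptions, not from the post): each unit of Alice’s security spending protects Alice somewhat and also protects others, but Alice maximizes only her own payoff, so she invests less than the socially optimal amount.

```python
# Toy model of the security externality: Alice's investment protects
# Alice AND spills over to protect others, but Alice counts only her
# own benefit. All numbers here are illustrative assumptions.

OWN_BENEFIT = 1.5        # protection Alice gets per sqrt-unit of investment
SPILLOVER_BENEFIT = 1.0  # protection others get, which Alice ignores

def private_payoff(invest):
    # Alice's view: her own (diminishing-returns) protection minus cost.
    return OWN_BENEFIT * invest ** 0.5 - invest

def social_payoff(invest):
    # Society's view: also counts the spillover protection to others.
    return (OWN_BENEFIT + SPILLOVER_BENEFIT) * invest ** 0.5 - invest

def best_level(payoff):
    # Brute-force search over a grid of investment levels.
    levels = [i / 100 for i in range(1, 500)]
    return max(levels, key=payoff)

private_optimum = best_level(private_payoff)
social_optimum = best_level(social_payoff)
print(private_optimum, social_optimum)  # Alice stops well short of the social optimum
```

With these (arbitrary) coefficients, Alice’s self-interested optimum is roughly a third of the socially optimal level; the exact gap depends on the numbers, but the direction of the gap is the point.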

Public policy can try to fix this by adjusting incentives in the right direction. A good policy will boost incentives to deploy the kinds of security measures that tend to protect others. Protecting oneself is good, but there is already an adequate incentive to do that; what we want is a bigger incentive to protect others. (To the extent that the same steps tend to protect both oneself and others, it makes sense to boost incentives for those steps too.)

A program along these lines would presumably give tax breaks to people and organizations that use networked computers in a properly secure way. In an ideal world, breaks would be given to those who do well in managing their systems to protect others. In practice, of course, we can’t afford to do a fancy security evaluation on each taxpayer to see whether he deserves a tax break, so we would instead give the break to those who meet some formalized criteria that serve as a proxy for good security. Designing these criteria so that they correlate well with the right kind of security, and so that they can’t be gamed, is the toughest part of designing the program. As Bruce Schneier says, the devil is in the details.

Another approach, which may be what Rep. Lundgren is trying to suggest in the original story, is to give tax breaks to companies that develop security technologies. A program like this might just be corporate welfare, or it might be designed to have a useful public purpose. To be useful, it would have to lead to lower prices for the right kinds of security products, or better performance at the same price. Whether it would succeed at this depends again on the details of how the program is designed.

If the goal is to foster more capable security products in the long run, there is of course another approach: government could invest in basic research in cybersecurity, or at least it could reverse the current disinvestment.

Virtual Worlds: Only a Game?

I wrote yesterday about virtual worlds, and the inevitability of government intervention in them. One objection to government intervention is that virtual worlds are only games; and it doesn’t make sense for government to intervene in games.

Indeed, many members of virtual worlds want the worlds to be games that operate at some remove from the real world. Games are more fun, they say, when what happens in the game doesn’t have real-world consequences. This was a common topic of discussion at State of Play.

The crux of this issue is the status of the in-world (i.e., in the virtual world) economy. Players can accumulate in-world stuff, including in-world currency, and they can trade in-world stuff for in-world currency. (A world might be designed without an identified currency, but it’s fairly certain that one in-world commodity would emerge as a consensus currency anyway.) Is in-world money just Monopoly money, or is it in some sense real money?

The only sensible answer is that it’s real money if it’s readily exchangeable for real-world currency. If you can trade in-world gold pieces for U.S. dollars (or Euros, etc.), and vice versa, then in-world gold is real money, and the in-world economy is a real economy.

If the world-designer wants to keep the world’s economy from becoming real, then, the designer must stop members from exchanging in-world currency for real currency. And this seems pretty much impossible, because there is no way to stop players from making side payments in the real world. Suppose Alice has gold pieces and Bob has dollars, and they want to trade. Bob transfers the dollars to Alice via a real-world channel (perhaps PayPal); virtual Alice gives virtual Bob the gold pieces. In-world, all that happens is a gift of gold from Alice to Bob. The dollar transfer isn’t visible to the world’s management. The world-designer can ban gifts of gold, but Alice and Bob can work around that ban by having Alice “lose” the gold in a private place where Bob will find it, or by cooking up a sham transaction where Alice buys a virtual toothpick from Bob at an inflated price.

Experience seems to show that any sufficiently popular in-world currency will become exchangeable for real money, whether the world-designer likes it or not.

There’s a useful lesson here about the limitations of code as a law-enforcement mechanism. One might think that code is law in a virtual world, in the sense that the world-designer writes the software code that defines what is possible in the world. It would be hard to think of a situation where code had more power to control behavior than in a virtual world. And yet the code can’t separate the virtual world from the real world. The reason it fails to do so is that the code doesn’t define the whole domain of human action; and people can defeat the code’s would-be restrictions by acting outside the code’s domain of control.

Once a virtual world gets big enough, and people value in-world stuff highly enough, it can no longer be just a game. The virtual world will touch the real world, along a sort of border through which money and communication flow.

Virtual World, Meet Terrestrial Government

Something remarkable is happening in virtual worlds. These are online virtual “spaces” where you can play a virtual character, and interact with countless other characters in a rich environment. It sounds like a harmless game, but there’s more to it than that. Much more.

When you put so many people into a place where they can talk to each other, where there are scarce but desirable objects, where they can create new “things” and share them, civilization grows. Complex social structures appear. Governance emerges. A sophisticated economy blooms. All of these things are happening in virtual worlds.

Consider the economy of Norrath, the virtual world of Sony Online Entertainment’s EverQuest service. Norrath has a currency, which trades on exchange markets against the U.S. dollar. So if you run a profitable business in Norrath, you can trade your Norrath profits for real dollars, and then use the dollars to pay your rent here in the terrestrial world. Indeed, a growing number of people are making their livings in virtual worlds. Some are barely paying their earth rent; but some are doing very well indeed. In 2003, Norrath was reportedly the 79th richest country in the world, as measured by GDP. Richer than Bulgaria.

(Want to try out a virtual world? SecondLife is a smaller but interesting world that offers free membership. They even have a promotional video made by members.)

Virtual worlds have businesses. They have stock markets where you can buy stock in virtual corporations. They have banks. People have jobs. And none of this is regulated by any terrestrial government.

This can’t last.

Last weekend at the State of Play conference, the “great debate” was over whether virtual worlds should be subject to terrestrial laws, or whether they are private domains that should determine their own laws. But regardless of whether terrestrial regulators should step in, they certainly will. Stock market regulators will object to the trading of virtual stocks worth real money. Employment regulators will object to the unconstrained labor markets, where people are paid virtual currency redeemable for dollars, in exchange for doing tasks specified by an employer. Banking regulators will object to unlicensed virtual banks that hold currency of significant value. Law enforcement will discover or suspect that virtual worlds are being used to launder money. And tax authorities will discover that things are being bought and sold, income is being earned, and wealth is being accumulated, all without taxation.

When terrestrial governments notice this, and decide to step in, things will get mighty interesting. If I ran a virtual world, or if I were a rich or powerful resident of one, I would start planning for this eventuality, right away.

Cost Tradeoffs of P2P

On Thursday, I jumped into a bloggic discussion of the tradeoffs between centrally-controlled and peer-to-peer design strategies in distributed systems. (See posts by Randy Picker (with comments from Tim Wu and others), Lior Strahilevitz, me, and Randy Picker again.)

We’ve agreed, I think, that large-scale online services will be designed as distributed systems, and the basic design choice is between a centrally-controlled design, where most of the work is done by machines owned by a single entity, and a peer-to-peer design, where most of the work is done by end users’ machines. Google is a typical centrally-controlled design. BitTorrent is a typical P2P design.

The question in play at this point is when the P2P design strategy has a legitimate justification. Which justifications are “legitimate”? This is a deep question in general, but for our purposes it’s enough to say that improving technical or economic efficiency is a legitimate justification, but frustrating enforcement of copyright is not. Actions that have legitimate justifications may also have harmful side-effects. For now I’ll leave aside the question of how to account for such side-effects, focusing instead on the more basic question of when there is a legitimate justification at all.

Which design is more efficient? Compared to central control, P2P has both disadvantages and advantages. The main disadvantage is that in a P2P design, the computers participating in the system are owned by people who have differing incentives, so they cannot necessarily be trusted to work toward the common good of the system. For example, users may disconnect their machines when they’re not using the system, or they may “leech” off the system by using the services of others but refusing to provide services. It’s generally harder to design a protocol when you don’t trust the participants to play by the protocol’s rules.

On the other hand, P2P designs have three main efficiency advantages. First, they use cheaper resources. Users pay about the same price per unit of computing and storage as a central provider would pay. But the users’ machines are a sunk cost – they’re already bought and paid for, and they’re mostly sitting idle. The incremental cost of assigning work to one of these machines is nearly zero. But in a centrally controlled system, new machines must be bought, and reserved for use in providing the service.

Second, P2P deals more efficiently with fluctuations in workload. The traffic in an online system varies a lot, and sometimes unpredictably. If you’re building a centrally-controlled system, you have to make sure that extra resources are available to handle surges in traffic; and that costs money. P2P, on the other hand, has the useful property that whenever you have more users, you have more users’ computers (and network connections) to put to work. The system’s capacity grows automatically whenever more capacity is needed, so you don’t have to pay extra for surge-handling capacity.
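The surge-capacity point can be sketched numerically. This is a back-of-the-envelope illustration with hypothetical numbers (the user counts and per-machine capacity figures are my assumptions, not from the post): the central operator must provision for peak load and pays for idle capacity the rest of the time, while a P2P system’s capacity tracks its user count automatically.

```python
# Hypothetical numbers illustrating surge provisioning. A central
# provider buys capacity for peak load; a P2P system gains capacity
# as users (and their machines) join.

PEAK_USERS = 10_000
AVG_USERS = 2_000
WORK_PER_USER = 1.0      # capacity units each active user demands
CAPACITY_PER_PEER = 1.2  # assumed: each peer contributes a bit more than it uses

# Central design: must provision for the peak, even at average load.
central_capacity = PEAK_USERS * WORK_PER_USER
central_idle_at_avg = central_capacity - AVG_USERS * WORK_PER_USER

# P2P design: capacity is a function of however many users show up.
def p2p_capacity(users):
    return users * CAPACITY_PER_PEER

print(central_idle_at_avg)   # capacity sitting idle at average load
# During a surge, the surge itself brings the capacity:
print(p2p_capacity(PEAK_USERS) >= PEAK_USERS * WORK_PER_USER)
```

In this sketch the central operator pays for 8,000 idle units at average load, while the P2P system covers the peak without anyone pre-provisioning for it, as long as each peer contributes at least as much capacity as it consumes.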

Third, P2P allows users to subsidize the cost of running the system, by having their computers do some of the work. In theory, users could subsidize a centrally-controlled system by paying money to the system operator. But in practice, monetary transfers can bring significant transaction costs. It can be cheaper for users to provide the subsidy in the form of computing cycles than in the form of cash. (A full discussion of this transaction cost issue would require more space – maybe I’ll blog about it someday – but it should be clear that P2P can reduce transaction costs at least sometimes.)

Of course, this doesn’t prove that P2P is always better, or that any particular P2P design in use today is motivated only by efficiency considerations. What it does show, I think, is that the relative efficiency of centrally-controlled and P2P designs is a complex and case-specific question, so that P2P designs should not be reflexively labeled as illegitimate.