October 9, 2024

Cost Tradeoffs of P2P

On Thursday, I jumped into a bloggic discussion of the tradeoffs between centrally-controlled and peer-to-peer design strategies in distributed systems. (See posts by Randy Picker (with comments from Tim Wu and others), Lior Strahilevitz, me, and Randy Picker again.)

We’ve agreed, I think, that large-scale online services will be designed as distributed systems, and the basic design choice is between a centrally-controlled design, where most of the work is done by machines owned by a single entity, and a peer-to-peer design, where most of the work is done by end users’ machines. Google is a typical centrally-controlled design. BitTorrent is a typical P2P design.

The question in play at this point is when the P2P design strategy has a legitimate justification. Which justifications are “legitimate”? This is a deep question in general, but for our purposes it’s enough to say that improving technical or economic efficiency is a legitimate justification, but frustrating enforcement of copyright is not. Actions that have legitimate justifications may also have harmful side-effects. For now I’ll leave aside the question of how to account for such side-effects, focusing instead on the more basic question of when there is a legitimate justification at all.

Which design is more efficient? Compared to central control, P2P has both disadvantages and advantages. The main disadvantage is that in a P2P design, the computers participating in the system are owned by people who have differing incentives, so they cannot necessarily be trusted to work toward the common good of the system. For example, users may disconnect their machines when they’re not using the system, or they may “leech” off the system by using the services of others but refusing to provide services. It’s generally harder to design a protocol when you don’t trust the participants to play by the protocol’s rules.
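
To make that concrete, here is a minimal sketch (in Python, with made-up peer names and an arbitrary reciprocity threshold) of the kind of extra bookkeeping a peer might do to stop serving participants who never give anything back. No real system is being described here; the point is only that untrusted participants force this sort of machinery into the design.

```python
# Minimal sketch: a peer that only keeps serving peers that have recently
# contributed something in return, as one crude defense against "leeching".
# Peer IDs, byte counts, and the threshold are hypothetical.

class ReciprocityTracker:
    def __init__(self, min_ratio=0.25):
        # A known peer must have uploaded at least min_ratio of what it has
        # downloaded from us before we continue serving it.
        self.min_ratio = min_ratio
        self.uploaded_to_us = {}      # peer_id -> bytes the peer sent us
        self.downloaded_from_us = {}  # peer_id -> bytes we sent the peer

    def record_upload_from(self, peer_id, nbytes):
        self.uploaded_to_us[peer_id] = self.uploaded_to_us.get(peer_id, 0) + nbytes

    def record_download_by(self, peer_id, nbytes):
        self.downloaded_from_us[peer_id] = self.downloaded_from_us.get(peer_id, 0) + nbytes

    def should_serve(self, peer_id):
        taken = self.downloaded_from_us.get(peer_id, 0)
        given = self.uploaded_to_us.get(peer_id, 0)
        # New peers get the benefit of the doubt; known peers must reciprocate.
        if taken == 0:
            return True
        return given / taken >= self.min_ratio


if __name__ == "__main__":
    tracker = ReciprocityTracker()
    tracker.record_download_by("peer_a", 1_000_000)  # peer_a took 1 MB from us
    tracker.record_upload_from("peer_a", 50_000)     # ...and gave back only 50 KB
    print(tracker.should_serve("peer_a"))  # False: below the reciprocity threshold
```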

On the other hand, P2P designs have three main efficiency advantages. First, they use cheaper resources. Users pay about the same price per unit of computing and storage as a central provider would pay. But the users’ machines are a sunk cost: they’re already bought and paid for, and they’re mostly sitting idle. The incremental cost of assigning work to one of these machines is nearly zero. In a centrally-controlled system, by contrast, new machines must be bought and reserved for use in providing the service.
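
A back-of-the-envelope calculation, with numbers that are entirely made up for illustration, shows how the sunk-cost point plays out: the central provider pays full price for every machine it dedicates to the service, while the P2P design pays only the small incremental cost of using machines the users already own.

```python
# Back-of-the-envelope comparison of provisioning costs.
# All dollar figures and machine counts below are invented for illustration.

machines_needed = 1_000           # machines' worth of work the service requires
central_cost_per_machine = 2_000  # purchase + hosting cost of a dedicated machine ($)
p2p_incremental_cost = 5          # extra electricity/bandwidth per user machine ($)

central_total = machines_needed * central_cost_per_machine
p2p_total = machines_needed * p2p_incremental_cost

print(f"Centrally controlled: ${central_total:,}")  # $2,000,000
print(f"Peer-to-peer:         ${p2p_total:,}")      # $5,000
```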

Second, P2P deals more efficiently with fluctuations in workload. The traffic in an online system varies a lot, and sometimes unpredictably. If you’re building a centrally-controlled system, you have to make sure that extra resources are available to handle surges in traffic, and that costs money. P2P, on the other hand, has the useful property that whenever you have more users, you have more users’ computers (and network connections) to put to work. The system’s capacity grows automatically whenever more capacity is needed, so you don’t have to pay extra for surge-handling capacity.
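
A toy model (again with invented numbers) illustrates the surge-handling point: the central system’s capacity is fixed at whatever was provisioned in advance, while the P2P system’s capacity grows roughly in proportion to the number of users generating the load.

```python
# Toy model of capacity under fluctuating load. All numbers are invented.

def central_capacity(provisioned_units, n_users):
    # Fixed: whatever was bought up front, regardless of how many users show up.
    return provisioned_units

def p2p_capacity(per_user_contribution, n_users):
    # Grows with the user population: each user brings some capacity along.
    return per_user_contribution * n_users

for n_users in (1_000, 10_000, 100_000):  # a traffic surge
    demand = n_users * 1.0                # assume one unit of demand per user
    central = central_capacity(provisioned_units=20_000, n_users=n_users)
    p2p = p2p_capacity(per_user_contribution=1.2, n_users=n_users)
    print(f"{n_users:>7} users: demand={demand:>9.0f}  "
          f"central={central:>6}  p2p={p2p:>9.0f}")

# At the largest surge, demand outstrips the fixed central provisioning,
# but the P2P pool has grown along with the users generating the demand.
```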

Third, P2P allows users to subsidize the cost of running the system, by having their computers do some of the work. In theory, users could subsidize a centrally-controlled system by paying money to the system operator. But in practice, monetary transfers can bring significant transaction costs. It can be cheaper for users to provide the subsidy in the form of computing cycles than in the form of cash. (A full discussion of this transaction cost issue would require more space – maybe I’ll blog about it someday – but it should be clear that P2P can reduce transaction costs at least sometimes.)
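
Here is one purely hypothetical illustration of the transaction-cost point: if every cash payment carries a fixed processing fee, collecting many small payments from users can eat up much of what is collected, whereas an in-kind contribution of spare cycles and bandwidth involves no per-payment fee at all.

```python
# Hypothetical illustration of why small cash subsidies can be swamped
# by per-transaction fees. All figures are invented.

n_users = 100_000
monthly_subsidy_per_user = 0.50  # each user's share of the running cost ($)
payment_processing_fee = 0.30    # fixed fee per cash transaction ($)

cash_collected = n_users * monthly_subsidy_per_user
fees_paid = n_users * payment_processing_fee

print(f"Subsidy collected in cash: ${cash_collected:,.2f}")
print(f"Lost to transaction fees:  ${fees_paid:,.2f}")
print(f"Net subsidy:               ${cash_collected - fees_paid:,.2f}")
# An in-kind subsidy (donated cycles and bandwidth) avoids the per-payment fee.
```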

Of course, this doesn’t prove that P2P is always better, or that any particular P2P design in use today is motivated only by efficiency considerations. What it does show, I think, is that the relative efficiency of centrally-controlled and P2P designs is a complex and case-specific question, so that P2P designs should not be reflexively labeled as illegitimate.

Comments

  1. Great discussion, guys. Really enjoyed reading this post. On a side note, Ed, you have a great blog and I have been a frequent visitor for the past few weeks. I am going to add a link to your blog on Future of P2P Blog.

  2. Alen Peacock says

    “The main disadvantage is that in a P2P design, the computers participating in the system are owned by people who have differing incentives, so they cannot necessarily be trusted to work toward the common good of the system.”

    Indeed. We are now seeing, however, P2P systems that enforce fairness to differing degrees. BitTorrent is a great example: you simply can’t cheat the protocol in any way that really benefits your own download. (A rough sketch of how BitTorrent enforces this appears after these comments.)

    As P2P application development matures, we’ll see more of this: protocols that implement fairness-enforcing features. Today’s P2P designers have to start out knowing that others will write software that implements their protocol, often in buggy and even malicious ways. Starting out with this assumption requires a more hardened design from the very beginning. This is a fact of life in the P2P world that, yes, makes the design a bit more involved at the outset, but it ultimately results in a way of writing software that is more correct and more resilient to a host of attacks, attacks to which current design methodologies (which rely on simpler assumptions about the environment) remain vulnerable.

    I consider that to be one of P2P’s great (emerging) advantages. The software has to work in an environment that is inherently more unfriendly than that of traditional distributed applications, and thus has to be stronger and fitter than they are.
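
To illustrate the fairness-enforcing behavior Alen Peacock describes, here is a rough sketch of a BitTorrent-style “tit-for-tat” choking decision: a peer gives its upload slots to the peers that have recently uploaded the most to it, plus one optimistically chosen peer so that newcomers can get started. This simplifies the real protocol considerably, and the peer names and rates are invented.

```python
# Rough sketch of a BitTorrent-style "tit-for-tat" choking decision:
# upload slots go to the peers that have recently uploaded the most to us,
# plus one randomly chosen "optimistic" peer so newcomers can get started.
# This simplifies the real protocol; rates and peer names are hypothetical.

import random

def choose_unchoked(download_rates, n_slots=3):
    """download_rates: dict of peer_id -> recent bytes/sec received from that peer."""
    # Reciprocate: reward the peers currently giving us the best download rates.
    by_rate = sorted(download_rates, key=download_rates.get, reverse=True)
    unchoked = set(by_rate[:n_slots])

    # Optimistic unchoke: give one other peer a chance to prove itself.
    remaining = [p for p in download_rates if p not in unchoked]
    if remaining:
        unchoked.add(random.choice(remaining))
    return unchoked


if __name__ == "__main__":
    rates = {"peer_a": 80_000, "peer_b": 55_000, "peer_c": 10_000,
             "peer_d": 0, "peer_e": 0}  # peer_d and peer_e upload nothing
    print(choose_unchoked(rates))
    # Free riders are unlikely to hold an upload slot for long, so refusing to
    # upload mostly hurts your own download, which is the commenter's point.
```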