May 24, 2024

Is BayTSP a Cyber-Trespasser?

Next week in my “IT and the Law” course, we’re discussing cyber-trespass. Reading the course materials got me wondering whether BayTSP might be a cyber-trespasser.

BayTSP is a small company that works for copyright holders, monitoring the contents of P2P networks. Among other things, they query individual computers on the P2P networks, to see what they contain. Are those queries trespasses?

The closest case is probably eBay v. Bidder’s Edge, in which a Federal judge granted a preliminary injunction that stopped Bidder’s Edge from using a web crawler to access eBay’s site. The judge found it likely that the automated accesses by Bidder’s Edge to eBay’s site were trespasses. And it wasn’t that Bidder’s Edge was hammering eBay’s site with so many requests that the site’s reliability or response time were affected – the impact of the accesses was minimal, but the judge found that that was enough to get over the legal bar.

To be precise, eBay claimed that the accesses constituted “trespass to chattels”, a legal term that is defined roughly as intentionally messing around with somebody else’s stuff in a way that causes damage. It’s a step, but not a huge one, from the Bidder’s Edge ruling to a claim that BayTSP’s activity constitutes trespass to chattels. It’s far from certain that a court would take that step; and bear in mind that the Bidder’s Edge ruling was criticized by many at the time.

BayTSP argues that what they are doing is legitimate, because P2P users are publishing the information for anybody to see, and BayTSP is only doing what any member of the public could do. That argument seems pretty strong. But Bidder’s Edge made the same argument, and it wasn’t enough to save them.

My guess is that a lawsuit against BayTSP, even if brought by a sympathetic plaintiff, would be a long shot. And I think such a lawsuit probably should fail, just as the Bidder’s Edge ruling should have gone the other way.


  1. I have been researching this topic for two years now and have just finished a paper on the privacy implications of the methods BayTSP, the RIAA, and the MPAA use to monitor and detect potentially infringing P2P activity. At twenty pages it will need to be cut down to 8-10 pages, and I still need to find a suitable conference at which to present it. Any ideas, folks?

    At any rate, the first and foremost problem with all of these RIAA lawsuits and the P2P ruckus is this: no one bothers to find out the purpose of a download before assuming guilt. Under well-established precedent, the Copyright Act allows and encourages “fair use” of copyrighted materials, including movies and songs. Beyond that, I have no sympathy for big corporations that treat their customers so poorly and try to beat back the tide of technology instead of embracing it with open arms.

  2. I don’t agree with the “What if everybody did that?” argument. Under this argument, we should outlaw the posting of links on popular websites. The “distributed denial-of-service” effect is so well known that being “slashdotted” has become as common a term as being “googled”. Multiple search engine companies all crawl the web servers on the Internet, and yet somehow the popular sites are still able to withstand the “assault”. The fact is that eBay shouldn’t have been complaining; it should have been happy to have business driven toward it. eBay’s real complaint was not the server load, it was the competition. But competition is supposed to be a good thing in our capitalist society, not something to be prevented.

  3. A counter-example to eBay v. Bidder’s Edge is the decision in Intel v. Hamidi, where the court held that, absent injury to the computer systems, there was no trespass to chattels.

    I think it would be a good thing to file a suit against BayTSP, assuming as you do that BayTSP would prevail, because it brings the opportunity to gain precedents that can be used to defend ourselves against future corporate tyrannies. Basically, if it is okay for BayTSP to search private citizens’ computers and use what it finds against them, then it should be okay for crackers to search corporate sites and use what they find against them.

  4. Myself, I’d argue that the “What if everybody did it?” problem has to be viewed against some sort of “reasonably foreseeable” standard. The first time the problem comes up, the defendant is always going to be able to assert “What’s the evidence that my straw is causing any harm? No evidence! It’s one straw, just one little straw, on your server.”

    But if you require something to break – or at least be in danger of breaking – before a court will rule, then by the time you get into court, it’s likely to be too late. All the straws together will have become a denial-of-service attack.

    I’m completely unconvinced that there was room for just a few companies in that niche. Cached auction lookups are an obvious feature to incorporate into a product.

    On the other hand, it’s clear how this can become a cudgel – especially if you’ve ever had to deal with an anti-spam crank invoking trespass to chattels over what is, objectively, trivial. Which is why I half-jokingly mentioned a compulsory license. Because at heart, the problem in eBay v. Bidder’s Edge is cost-shifting of the database maintenance. That’s how it’s distinguishable from BayTSP.

  5. One thing I told Mark Ishikawa at the conference was that he should worry about the right to reverse engineer. Despite a number of court cases generally suggesting that software reverse engineering is a legitimate activity, lawyers keep proliferating theories that claim that particular acts of reverse engineering are actually forbidden. BayTSP plainly needs to reverse engineer protocols (and, frequently, clients) in order to interoperate for evidence-gathering purposes. Under the theories of several recent and current lawsuits, the developers of the original P2P clients might have a claim against BayTSP for doing so. I think it would be a bad result if a court tried to block BayTSP from reverse engineering, but I think it’s a risk that they really need to keep in mind.

  6. eBay actually argued that any measurable load on its servers would have constituted a trespass. The judge didn’t quite say how much load was too much, so that issue is still up in the air. According to Dan Burk’s article “The Trouble with Trespass”, trespass to chattels requires “substantial” interference with the chattel; mere “trivial” interference is not enough.

    One of the most criticized aspects of the judge’s ruling was his “What if everybody did it?” reasoning. The counterargument was that there was no evidence that this danger existed. Hardly anybody was crawling eBay, hardly anybody had asked permission to do so, and it was clear that there was room for two companies at most in the market niche that Bidder’s Edge was trying to occupy.

    It’s an interesting question whether it is a cyber-trespass to disobey a robots.txt file. You can make an argument either way based on the Bidder’s Edge ruling.

  7. Looking over the case, it appears that the main issue was the load on eBay’s servers. eBay wanted Bidder’s Edge only to do an eBay search when a user asked for it. But BE was instead doing frequent global searches and caching the results, so it could be more responsive to its customers. eBay’s complaint was that this was putting too much load on its system. And as Seth says, the concern was that if BE got away with it, everyone else would, too, and it would become a serious problem.

    It’s not clear how to translate this to P2P, because there’s not as much distinction between “legitimate” and “bad” P2P accesses as there was in this case. (Ironically, here “legitimate” mostly means illegal copyright infringement, and “bad” means efforts to enforce the legal rights of copyright holders.) Unlike with eBay, one search is much like another.

    Although it didn’t play much part in the decision, eBay had set up a robots.txt exclusion file, which is supposed to say that no automated searchers are allowed – only accesses directly initiated by humans. Maybe something like that would work for P2P? On the other hand, forbidding robots from P2P networks would close off an interesting area for innovation, reducing our “freedom to tinker”, as it were. So I don’t think you would support taking things in that direction.
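    The robots exclusion mechanism mentioned above is just a plain-text file of User-agent/Disallow rules that compliant crawlers check voluntarily before fetching pages; nothing technically prevents a robot from ignoring it, which is what makes disobedience a legal rather than technical question. A minimal sketch using Python’s standard `urllib.robotparser` (the blanket-ban rules shown are an assumption about what eBay served, for illustration only):

    ```python
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt banning all robots from the entire site,
    # in the spirit of what eBay is described as having set up.
    rules = [
        "User-agent: *",
        "Disallow: /",
    ]

    rp = RobotFileParser()
    rp.parse(rules)

    # A well-behaved crawler checks before each fetch:
    allowed = rp.can_fetch("ExampleBot", "https://www.ebay.com/listings")
    print(allowed)  # prints False
    ```

    The check is purely advisory: a crawler that skips the `can_fetch` call (or ignores its answer) can still request the page, which is exactly the scenario the trespass question turns on.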

  8. The basic problem with these situations is the age-old “What if everybody did that?”

    I think there’s a distinction to be made: it’s easy to envision hundreds of companies trying to launch aftermarket services by querying eBay, but there aren’t likely to be that many companies trying to monitor P2P files.

    Hmm … maybe the solution is compulsory licensing of querying? (I originally meant this as a joke, but on second thought, maybe it’s a good idea.)