October 13, 2024

Twittering for the Marines

The Marines recently issued an order banning social network sites (Facebook, MySpace, Twitter, etc.). The Pentagon is reviewing this sort of thing across all services. This follows on the heels of a restrictive NFL policy along the same lines. Slashdot has a nice thread where, among other things, we learn that some military personnel will contract with off-base ISPs for private Internet connections.

There are really two separate security issues to be discussed here. First, there’s the issue that military personnel might inadvertently leak information that could be used by their adversaries. This is what the NFL is worried about. The Marines’ order makes no mention of such leaks, which would in any case already be covered by existing rules and regulations, never mind continuing education (see, e.g., “loose lips sink ships”). Instead, our discussion will focus on the issue explicitly raised in the order: social networks as a vector for attackers to get at our military personnel.

For starters, there are other tools and techniques that can be used to protect people from visiting malicious web sites. There are black-list services, such as Google’s Safe Browsing, built into any recent version of Firefox. There are also better browser architectures, like Google’s Chrome, that isolate one part of the browser from another. The military could easily require the use of a specific web browser. The military could go one step further and provide sacrificial virtual machines, perhaps running on remote hosts and accessed via something like VNC, to allow personnel to surf the public Internet. A solution like this seems infinitely preferable to forcing personnel to use third-party ISPs on personal computers, where vulnerable machines may well be compromised yet go unnoticed by military sysadmins. (Or worse, the ISP could itself be compromised, giving a huge amount of intel to the enemy; contrast this with the military, with its own networks and its own crypto, which are presumably designed to leak far less intel to a local eavesdropper.)
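
To make the black-list idea concrete, here is a minimal sketch — not any particular service’s API, and with invented list entries — of the kind of check a filtering gateway or browser could perform before letting a request through:

```python
# Minimal sketch of a black-list check; the list and its entries are invented.
from urllib.parse import urlparse

BLOCKLIST = {"malicious.example", "phish.example"}  # hypothetical entries

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is on the list."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check "a.b.c", then "b.c", then "c", so subdomains of a blocked
    # domain are caught as well.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("http://login.phish.example/account"))  # True
print(is_blocked("https://example.org/"))                # False
```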

Even better, the virtual machine / remote display technique allows the military sysadmin to keep all kinds of forensic data. The sacrificial machines, observing users’ external network behavior, form a fantastic honeynet for capturing malicious payloads. If your personnel are being attacked, you want to have the evidence in hand to sort out who the attacker is and why you’re being attacked. That helps you block future attacks and formulate any counter-measures you might take. You could do this just as well for email programs as for web browsing. Might not work so well for games, but otherwise it’s a pretty powerful technique. (And, oh by the way, we’re talking about the military here, so personnel privacy isn’t as big a concern as it might be in other settings.)
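
As a rough sketch of the forensic side, suppose the sacrificial machines write a simple connection log (the four-field format below is invented for illustration). A small script can then summarize which external hosts each user’s sessions contacted, giving analysts a starting point:

```python
# Sketch: summarize a hypothetical per-VM connection log for forensic review.
# Assumed (invented) log format: "timestamp user destination_host bytes"
from collections import Counter, defaultdict

def summarize(log_lines):
    """Return a mapping {user: Counter of destination hosts contacted}."""
    per_user = defaultdict(Counter)
    for line in log_lines:
        fields = line.split()
        if len(fields) != 4:
            continue  # skip malformed lines
        _timestamp, user, dest, _nbytes = fields
        per_user[user][dest] += 1
    return per_user

sample = [
    "2009-08-05T10:00:00 jdoe facebook.com 5120",
    "2009-08-05T10:00:02 jdoe suspicious.example 900",
    "2009-08-05T10:00:05 asmith facebook.com 2048",
]
for user, dests in summarize(sample).items():
    print(user, dests.most_common())
```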

It’s also important to consider the benefits of social networking. Military personnel are not machines. They’re people with spouses, children, and friends back home. Facebook is a remarkably efficient way to keep in touch with large numbers of friends without investing large amounts of time — ideal for the Marine, back from patrol, to get a nice chuckle when winding down before heading off to sleep.

In short, it’s problematic to ban social networking on “official” machines, which only pushes personnel to use these things on “unofficial” machines with “unofficial” ISPs, where you’re less likely to detect attacks and it’s harder to respond to them. Bring them in-house, in a controlled way, where you can better manage security issues and have happier personnel.

AP's DRM Announcement: Much Ado About Nothing

Last week the Associated Press announced it would be developing some kind of online news registry to control use of news content. From AP’s press release:

The registry will employ a microformat for news developed by AP and which was endorsed two weeks ago by the Media Standards Trust, a London-based nonprofit research and development organization that has called on news organizations to adopt consistent news formats for online content. The microformat will essentially encapsulate AP and member content in an informational “wrapper” that includes a digital permissions framework that lets publishers specify how their content is to be used online and which also supplies the critical information needed to track and monitor its usage.

The registry also will enable content owners and publishers to more effectively manage and control digital use of their content, by providing detailed metrics on content consumption, payment services and enforcement support. It will support a variety of payment models, including pay walls.

It was hard to make sense of this, so I went looking for more information. AP posted a diagram of the system, which only adds to the confusion — your satisfaction with the diagram will be inversely proportional to your knowledge of the technology.

As far as I can tell, the underlying technology is based on hNews, a microformat for news, shown in the AP diagram, that was announced by AP and the Media Standards Trust two weeks before the recent AP announcement.

Unfortunately for AP, the hNews spec bears little resemblance to AP’s claims about it. hNews is a handy way of annotating news stories with information about the author, dateline, and so on. But it doesn’t “encapsulate” anything in a “wrapper”, nor does it do much of anything to facilitate metering, monitoring, or paywalls.
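
For the curious, here is a small sketch of what consuming hNews markup amounts to, using only the Python standard library. The class names below follow my reading of the hNews draft and may not match the spec exactly; the point is simply that hNews is annotation of ordinary HTML, not a “wrapper” around the content:

```python
# Sketch: extract hNews-style annotations using only the standard library.
# The class names are my reading of the hNews draft and may not be exact.
from html.parser import HTMLParser

class HNewsExtractor(HTMLParser):
    FIELDS = {"entry-title", "author", "dateline", "source-org", "item-license"}

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None  # field whose text we are currently collecting

    def handle_starttag(self, tag, attrs):
        classes = set((dict(attrs).get("class") or "").split())
        hit = classes & self.FIELDS
        if hit and self._current is None:
            self._current = hit.pop()
            self.fields.setdefault(self._current, "")

    def handle_data(self, data):
        if self._current is not None:
            self.fields[self._current] += data.strip()

    def handle_endtag(self, tag):
        self._current = None  # simplification: assume annotated fields don't nest

page = """<div class="hnews hentry">
  <h1 class="entry-title">Example headline</h1>
  <span class="author vcard">A. Reporter</span>
  <span class="dateline">Springfield</span>
  <span class="source-org vcard">Example Wire Service</span>
</div>"""

extractor = HNewsExtractor()
extractor.feed(page)
print(extractor.fields)
```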

AP also says that hNews “includes a digital permissions framework that lets publishers specify how their content is to be used online”. This may sound like a restrictive DRM scheme, aimed at clawing back the rights copyright grants to users. But read the fine print. hNews does include a “rights” field that can be attached to an article, but the rights field uses ccREL, the Creative Commons Rights Expression Language, whose definition states unequivocally that it does not limit users’ rights already granted by copyright and can only convey further rights to the user. Here’s the ccREL definition, page 9 (a toy illustration of this additive-only idea follows the excerpt):

Here are the License properties defined as part of ccREL:

  • cc:permits — permits a particular use of the Work above and beyond what default copyright law allows.
  • cc:prohibits — prohibits a particular use of the Work, specifically affecting the scope of the permissions provided by cc:permits (but not reducing rights granted under copyright).
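
Here is a toy model — mine, not anything from the ccREL spec — of how these properties combine with default copyright: prohibitions can only narrow the extra permissions a license grants, never the baseline rights users already have.

```python
# Toy model (mine, not from the ccREL spec) of additive-only permissions:
# cc:prohibits can narrow only the extra permissions granted by cc:permits,
# never the rights a user already has under default copyright law.

# Hypothetical stand-ins for what copyright law already allows (e.g. fair use).
COPYRIGHT_DEFAULTS = {"read", "quote_excerpts", "private_backup"}

def effective_rights(permits, prohibits):
    """Rights the user ends up with: defaults plus (permits minus prohibits)."""
    return COPYRIGHT_DEFAULTS | (set(permits) - set(prohibits))

# A license that permits sharing and remixing but prohibits commercial use.
print(sorted(effective_rights(
    permits={"share", "remix", "commercial_use"},
    prohibits={"commercial_use"},
)))
# Note that nothing in COPYRIGHT_DEFAULTS can ever be removed.
```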

It seems that there is much less to the AP’s announcement than meets the eye. If there’s a story here, it’s in the mismatch between the modest and reasonable underlying technology, and AP’s grandiose claims for it.

What Economic Forces Drive Cloud Computing?

You know a technology trend is all-pervasive when you see New York Times op-eds about it — and this week saw the first Times op-ed about cloud computing, by Jonathan Zittrain. I hope to address JZ’s argument another day. Today I want to talk about a more basic issue: why we’re moving toward the cloud.

(Background: “Cloud computing” refers to the trend away from services provided by software running on standalone personal computers (“clients”), toward services provided across the Net with data stored in centralized data centers (“servers”). GMail and HotMail provide email in the cloud, Flickr provides photo albums in the cloud, and so on.)

The conventional wisdom is that functions are moving from the client to the server because server-side computing resources (storage, computation, and data transfer) are falling in cost, relative to the cost of client-side resources. Basic economics says that if a product uses two inputs, and the relative costs of the inputs change, production will shift to use more of the newly-cheap input and less of the newly-expensive one — so as server-side resources get relatively cheaper, designs will start to use more server-side resources and fewer client-side resources. (In fact, both server- and client-side resources are getting cheaper, but the argument still works as long as the cost of server-side resources is falling faster, which it probably is.)
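
A toy calculation, with invented numbers, shows the substitution effect this argument relies on: when both prices fall but the server-side price falls faster, the cost-minimizing design flips from client-heavy to server-heavy.

```python
# Toy numbers (invented) illustrating the substitution argument.
# Each design needs (client units, server units) of resources per user.
DESIGNS = {
    "client-heavy": (10, 2),
    "server-heavy": (3, 12),
}

def cost(design, client_price, server_price):
    c, s = DESIGNS[design]
    return c * client_price + s * server_price

# Both prices fall over time, but the server price falls faster.
for era, (client_price, server_price) in [("then", (1.0, 1.0)),
                                          ("now", (0.5, 0.1))]:
    costs = {d: cost(d, client_price, server_price) for d in DESIGNS}
    print(era, costs, "-> cheapest:", min(costs, key=costs.get))
# then: client-heavy wins (12 vs 15); now: server-heavy wins (2.7 vs 5.2)
```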

This argument seems reasonable — and smart people have repeated it — but I think it misses the most important factors driving us into the cloud.

For starters, the standard argument assumes that a move into the cloud simply relocates functions from client to server — we’re consuming the same resources, just consuming them in a place where they’re cheaper. But if you dig into the details, it looks like the cloud approach may use a lot more resources.

Rather than storing data on the client, the cloud approach often replicates data, storing it on both server and client. If I use GMail on my laptop, my messages are stored on Google’s servers and on my laptop. Beyond that, some computation is replicated on both client and server, and we mustn’t forget that it’s less resource-efficient to provide computing inside a web browser than on the raw hardware. Add all of this up, and we might easily find that a cloud approach uses more resources overall than a client-only approach.

Why, then, are we moving into the cloud? The key issue is the cost of management. Thus far we have focused only on computing resources such as storage, computation, and data transfer; but the cost of managing all of this — making sure the right software version is installed, that data is backed up, that spam filters are updated, and so on — is a significant part of the picture. Indeed, as the cost of computing resources, on both client and server sides, continues to fall rapidly, management becomes a bigger and bigger fraction of the total cost. And so we move toward an approach that minimizes management cost, even if that approach is relatively wasteful of computing resources. The key is not that we’re moving computation from client to server, but that we’re moving management to the server, where a team of experts can manage matters for many users.
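
A second toy calculation, again with invented numbers, illustrates the management point: even if the cloud design burns more raw computing resources per user, amortizing one admin team over thousands of users can make its total cost far lower.

```python
# Invented numbers illustrating the management-cost argument: the cloud
# design uses MORE raw computing resources per user, yet wins on total cost
# because one admin team is amortized over many users.
USERS = 10_000
RESOURCE_PRICE = 0.10   # dollars per resource unit (and falling)
ADMIN_COST = 80_000     # dollars per administrator per year

designs = {
    #               resource units per user / users per administrator
    "self-managed": {"units": 50, "users_per_admin": 100},
    "cloud":        {"units": 80, "users_per_admin": 5_000},
}

for name, d in designs.items():
    resources = USERS * d["units"] * RESOURCE_PRICE
    admins = USERS / d["users_per_admin"]
    total = resources + admins * ADMIN_COST
    print(f"{name}: resources ${resources:,.0f} + "
          f"management ${admins * ADMIN_COST:,.0f} = ${total:,.0f}")
# self-managed: $50,000 + $8,000,000 = $8,050,000
# cloud:        $80,000 + $160,000   = $240,000
```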

This is still a story about shifts in the relative costs of inputs. The cost of computing is falling (wherever the computing happens), so we’re happy to use more computing resources in order to use our relatively expensive management inputs more efficiently.

What does this tell us about the future of cloud services? That question will have to wait for another day.

Lessons from Amazon's 1984 Moment

Amazon got some well-deserved criticism for yanking copies of Orwell’s 1984 from customers’ Kindles last week. Let me spare you the copycat criticism of Amazon — and the obvious 1984-themed jokes — and jump right to the most interesting question: What does this incident teach us?

Human error was clearly part of the problem. Somebody at Amazon decided that repossessing purchased copies of 1984 would be a good idea. They were wrong about this, as both the public reaction and the company’s later backtracking confirm. But the fault lies not just with the decision-maker, but also with the factors that made the decision more likely, including some aspects of the technology itself.

Some put the blame on DRM, but that’s not the problem here. Even if the Kindle used open formats and let you export and back up your books, Amazon could still have made 1984 disappear from your Kindle. Yes, some users might have had backups of 1984 stored elsewhere, but most users would have lost their only copy.

Some blame cloud computing, but that’s not precisely right either. The Kindle isn’t really a cloud device — the primary storage, computing and user interface for your purchased books are provided by your own local Kindle device, not by some server at Amazon. You can disconnect your Kindle from the network forever (by flipping off the wireless network switch on the back), and it will work just fine.

Some blame the fact that Amazon controls everything about the Kindle’s software, which is a better argument but still not quite right. Most PCs are controlled by a single company, in the sense that that company (Microsoft or Apple) can make arbitrary changes to the software on the PC, including (in principle) deleting files or forcibly removing software programs.

The problem, more than anything else, is a lack of transparency. If customers had known that this sort of thing was possible, they would have spoken up against it — but Amazon had not disclosed it, and generally does not offer clear descriptions of how the product works or what kinds of control the company retains over users’ devices.

Why has Amazon been less transparent than other vendors? I’m not sure, but let me offer two conjectures. It might be because Amazon controls the whole system. Systems that can run third-party software have to be more open, in the sense that they have to tell the third-party developers how the system works, and they face some pressure to avoid gratuitous changes that might conflict with third-party applications. Alternatively, the lack of transparency might be because the Kindle offers less functionality than (say) a PC. Less functionality means fewer security risks, so customers don’t need as much information to protect themselves.

Going forward, Amazon will face more pressure to be transparent about the Kindle technology and the company’s relationship with Kindle buyers. It seems that e-books really are more complicated than dead-tree books.

A Freedom-of-Speech-based Approach To Limiting Filesharing – Part III: Smoke, smoke!

Over the past two days we have seen that filesharing is vulnerable to spamming, and that as a defense, the filesharers have used the IP block list to exclude the spammers from sharing files. Today I discuss how I think lawyers and laypeople should look at the legal issues. Since I am most decidedly not a lawyer, nothing I say here should be considered definitive. Hopefully, it is at least interesting.

An analogy:

Washington Square, in New York City, was for many years a place where drugs were sold. A fellow would stand around quietly saying to passersby “Smoke, smoke!” However, this so-called “steerer” held no drugs. His role was simply to direct the buyer to the “pitcher”, who had the drugs somewhere nearby, and who kept silent.

Even the strongest defender of free-speech rights understands that the “steerer’s” words are not just speech. His words are not similar to those of this article, though both simply say that someone in the park is selling. He is as legally responsible for the sale as the “pitcher”, because they are, according to legal terminology, “acting in concert”. He is a drug dealer who may never touch any drugs. Note also that the “steerer” receives payments from the illegal transactions – though it is not in fact legally necessary to be able to prove the payments to establish that he’s “acting in concert”. All that’s required is that the “steerer” and the “pitcher” share “community of purpose” in facilitating the illegal transaction.

In the Napster case, the court held that Napster, even though it did not have any copyrighted data on its servers, was liable for contributory infringement. To use Napster, a downloader would log in to Napster’s central server, which connected the user to another user who had the file being searched for. Since it was Napster’s role to hook up the parties illegally exchanging files, it is reasonable to see this as analogous to the “steerer” in Washington Square – Napster didn’t have the infringing materials, but that really isn’t a defense.

The gnutella network is decentralized precisely to avoid the legal problem presented by the Napster decision. Nonetheless, something in gnutella is still centralized: the IP block list. Users of LimeWire get their block list from LimeWire and only from LimeWire. Accordingly, if Napster was like the “steerer” in Washington Square, LimeWire furthers the “community of purpose” in a different way, by supplying negative information rather than affirmative: it is like someone paid to stand in the park pointing out which sellers are cheaters peddling bad drugs, so that purchasers can find the good stuff.
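
To make concrete what a centrally distributed block list does inside a client, here is a hypothetical sketch — not LimeWire’s actual code or list format — of a peer filter driven by a vendor-supplied list of IP ranges. The point is that whoever supplies the list decides which peers most users ever see.

```python
# Hypothetical sketch of a client-side peer filter driven by a centrally
# distributed block list of IP ranges. Not LimeWire's actual code or list
# format; it only illustrates the mechanism under discussion.
import ipaddress

# Imagine this list is fetched from a single vendor-controlled location.
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation range, stand-in
    ipaddress.ip_network("198.51.100.0/24"),  # documentation range, stand-in
]

def peer_allowed(peer_ip: str) -> bool:
    """Drop any peer whose address falls inside a vendor-blocked range."""
    addr = ipaddress.ip_address(peer_ip)
    return not any(addr in net for net in BLOCKED_RANGES)

print(peer_allowed("203.0.113.42"))  # False: silently filtered out
print(peer_allowed("192.0.2.7"))     # True: visible and able to share files
```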

What distinguishes a legitimate P2P spam-filtering authority from one that shares “community of purpose” with infringers? The former could legitimately act to keep the network from being flooded by those selling weight-loss drugs, without facilitating infringement. There is probably no bright-line rule, but it is reasonably clear that LimeWire is well on the wrong side of any possible grey area.

It’s useful to compare gnutella spam cop LimeWire with e-mail spam cop AOL.

LimeWire does not clearly advertise its spam cop role as a feature of its software, and does not discuss its block list. (The LimeWire web site has only the cryptic description “We’re always working to protect you from viruses and unwanted sharing.”) There is no discussion anywhere of what sorts of sites and files it is blocking, or why. LimeWire gives no notification to a site when it is blocked, nor is there any way to contact LimeWire to have a site removed from the block list.

In comparison, blocking e-mail spam is, for AOL, a major selling point. AOL does not block bulk e-mailers (many of which are legitimate) on a whim. Every e-mail rejected by AOL is bounced with a notification to the sender, and there are detailed instructions to bulk e-mailers as to what they need to do to avoid running afoul of AOL’s filters. There is a way to contact AOL to remove oneself from the block list, if one is legitimate. The whole process is transparent.

It is clear that a legitimate spam cop would have no reason to block spoofers, since any search for a non-infringing file is unmolested by spoofs; yet it appears that LimeWire does block MediaDefender. In fact, LimeWire appears to be quietly promising to do so, when it says that it protects against “unwanted sharing”, whatever that is.

Lastly, it appears that LimeWire’s statements in court conceal what it is doing.

As we mentioned in the first post, there is an ongoing case, Arista v Lime Group. In its motion for summary judgment, LimeWire states:

Likewise, LW does not have the ability to control the manner in which users employ the LimeWire software. Unlike the Napster defendants, LW does not maintain central servers containing files or indices of files. … LW’s system is like that analysed by the Ninth Circuit in Grokster, “truly decentralized”. … LW no more controls the actions of its customers than do any of the thousands of companies that provide hardware or other software used in connection with the internet.

This omits any discussion of LimeWire’s centralized block list. LW assuredly does control the manner in which users employ the LimeWire software, because once a site is added to the IP block list, it is no longer visible to most LimeWire users. This is very far from the normal situation that applies to other software used in connection with the internet.

Moreover, the plaintiffs’ attorneys appear to be unaware of the blocking of spoofs, as their reply motion makes no mention of it (nor of the other hidden features of the LimeWire software discussed yesterday).

While it might be possible to run a legitimate spam-blocking service for P2P networks, it would look rather different from what LimeWire is doing.

Conclusion

The best way to regulate filesharing effectively is to analyze the various players’ roles on free-speech grounds. The individual filesharers (when they share infringing material) are certainly violating the law, but in a small way that probably can’t be reasonably controlled. The publishers of the software that allows the network to run (including LimeWire) are exercising free speech – the fact that their code can be made to do something illegal should be irrelevant. However, LimeWire is facilitating infringement because of the way it runs its IP block list. If LimeWire were shut down, the gnutella network would become useless for downloading infringing music. Because of its actions to keep the network safe for infringers – its “acting in concert” – LimeWire should be liable for contributory infringement.

This course will avoid the free-speech restrictions that trouble many. In terms of preventing infringement, it will also be far more productive than trying to target the small fish. It is an effective measure that respects rights.

[This series of posts has been a somewhat shortened version of an article here.]