April 24, 2014

Cloud(s), Hype, and Freedom

Richard Stallman’s recent description of ‘the cloud’ as ‘hype’ and a ‘trap’ seems to have stirred up a lot of commentary, but not a lot of clear discussion of the problems Stallman raised. This isn’t surprising- the term ‘the cloud’ has always been vague. (It was hard to resist saying ‘cloudy.’ ;) When people say ‘the cloud’ they are really lumping at least four ‘cloud types’ together.

traditional applications, hosted elsewhere

Probably the most common type of ‘cloud’ is a service that takes traditional software functionality and moves it to remotely hosted, (typically) web-delivered servers. Gmail and salesforce.com are like this- fairly traditional email and CRM applications, ‘just’ moved to the web.

If Stallman’s ‘hype’ claim is valid anywhere, it is here. Administration and maintenance costs are definitely lower when an expert like Google funds and runs the server, and reliability may improve as well. But the core functionality of these apps, and the ability to access data over a network, have been present since the dawn of networked computing. On average, this is undoubtedly a significant change in quality, but only rarely a change in type- making the buzz much harder to justify.

Stallman’s ‘trap’ charge is more complex. Computer users have long compromised on personal control by storing data remotely but accessing it via standardized protocols. This introduced risks- you had to trust the data host and couldn’t tinker with the server- but kept some controls- you could switch clients, and typically you could export the data. Some web apps still strike that balance- for example, most gmail features are accessible via good old POP and IMAP. But others don’t.
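To make that escape hatch concrete, here is a rough sketch (in Python, using the standard imaplib and email modules) of pulling your own messages out of Gmail over IMAP. The address and password are placeholders, and IMAP access has to be switched on in the account’s settings- the point is simply that a standard protocol lets any client, even a ten-line script, get at the data.

    # Rough sketch: export your own mail from Gmail over IMAP.
    # "you@gmail.com" and "app-password" are placeholders; IMAP access
    # must be enabled on the account for this to work.
    import email
    import imaplib

    conn = imaplib.IMAP4_SSL("imap.gmail.com")       # IMAP over SSL
    conn.login("you@gmail.com", "app-password")
    conn.select("INBOX", readonly=True)              # don't change anything server-side

    status, data = conn.search(None, "ALL")          # sequence numbers of all messages
    for num in data[0].split():
        status, parts = conn.fetch(num, "(RFC822)")  # full raw message, headers and body
        msg = email.message_from_bytes(parts[0][1])
        print(num.decode(), msg.get("Subject"))      # or append the raw bytes to an mbox file

    conn.logout()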

Getting your data out of a service like salesforce can be a ‘hidden cost’ of an apparently free service, and even with a relatively standards-based service like gmail you have no freedom to make changes to the server. These risks are what Stallman means when he talks about a ‘trap’, and regardless of your conclusion about them, understanding them is important.

services involving data that can’t (yet) be managed locally

Google Maps and Google Search are the canonical examples of this type of cloud service- heaps of data so large that you would need a large data center to host your own copy and a very, very fat pipe to keep it up to date.

Hype-wise, these are a mixed bag. These services definitely bring radical new functionality that traditional, locally hosted software can’t provide- I can’t store all of Google Maps on my phone. That hype is justified. At the same time, our personal ability to store and process data is still growing quickly, so the claims that this type of cloud service will always ‘require’ remote servers may be overblown.

‘Trap’-wise? Dependence on these services reminds me of ‘dependence’ on a library before the internet- you can work to make sure your library respects your privacy, prefer public libraries to private ones, or establish a personal library if your reading interests are narrow, but in the end eschewing large libraries is likely to be a case of cutting off your nose to spite your face. We’re in the same state with this type of cloud service. You can avoid them, but those concerned with freedom might be better off understanding and fixing them than condemning them altogether.

services that make creation of new data technically or economically feasible

Facebook and Wikipedia are the canonical examples here. Unlike the first two types of cloud, where data was available but inconvenient before it ended up in the cloud, this class of cloud applications creates information that wasn’t previously feasible to collect at all.

There may well not be enough hype around this type of cloud. Replicating web-scale collaborative facilities like these will be very difficult to do in a p2p fashion, and the impact of the creation of new information (even when it is as mundane as Facebook’s data often is) is hard to overstate.

Like the previous type of cloud, it is hard to call these a trap per se- they do make it hard to leave, but they do so by providing new functionality that is very hard to get with any traditional software model.

services offering computing and storage, rather than data

The most recent type of cloud service is remotely provisioned computing and storage, like Amazon’s EC2/S3 and Google’s App Engine. This is perhaps the most purely generative type of cloud, allowing individuals to create new services and scale them out to serve millions of people without having to invest in their own physical infrastructure. It is hard to see any way in which this can reasonably be called ‘hype,’ given the reach it allows individuals and small or transient groups to have, which might otherwise cost them many thousands of dollars.
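As a rough sketch of what ‘storage as a service’ looks like from the developer’s side, here is a minimal example using Amazon S3 through the boto3 library. The bucket name is a placeholder, and AWS credentials are assumed to already be configured in the environment; the point is that durable, replicated storage is a couple of API calls, with no server of your own anywhere in the picture.

    # Rough sketch: storing and retrieving an object on Amazon S3.
    # Assumes AWS credentials are already configured (environment variables
    # or ~/.aws/credentials); "example-bucket-name" is a placeholder.
    import boto3

    s3 = boto3.client("s3")

    # Write a small object; S3 handles durability and scaling behind the scenes.
    s3.put_object(Bucket="example-bucket-name",
                  Key="hello.txt",
                  Body=b"hello from the cloud")

    # Read it back.
    response = s3.get_object(Bucket="example-bucket-name", Key="hello.txt")
    print(response["Body"].read().decode())

Note that those calls are specific to one vendor’s API, which is exactly the lock-in tradeoff discussed below.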

From a freedom perspective, these can be both the best and worst of the cloud types. On the plus side, these services can be incredibly transparent- developers who use them directly have access to their own source code, and end users may not know they are using them at all. On the down side, especially for proprietary platforms like App Engine, these can have very deep lock-in- it is complicated, expensive, and risky to switch deployment platforms after achieving success. And they replace traditional, very open platforms- a tradeoff that isn’t always appreciated.

takeaways

‘The cloud’ isn’t going away, but we can clarify our thinking about it by talking about the different types of clouds. Hopefully this post is a useful step in that direction.

[This post is an extension of some ideas I've been playing around with on my own blog and at the autonomo.us group blog; readers curious about these issues may want to read further in those places. I also recommend reading this piece, which set me on the (very long) road to this particular post.]

HBO Exec Wants to Rename DRM

People have had lots of objections to Digital Rights Management (DRM) technology – centering mainly on its clumsiness and the futility of its anti-infringement rationale – but until recently nobody had complained that the term “Digital Rights Management” was insufficiently Orwellian.

That changed on Tuesday, when HBO’s Chief Technology Officer, Bob Zitter, suggested at an industry conference that DRM needs a name change. Zitter’s suggested name: Digital Consumer Enablement, or DCE.

The irony here is that “rights management” is itself an industry-sponsored euphemism for what would more straightforwardly be called “restrictions”. But somehow the public got the idea that DRM is restrictive, hence the need for a name change.

Zitter went on to discuss HBO’s strategy. HBO wants to sell shows in HighDef, but the problem is that many consumers are watching HD content using the analog outputs on their set-top boxes – often because their fancy new HD televisions don’t implement HBO’s favorite form of DRM. So what HBO wants is to disable the analog outputs on the set-top box, so consumers have no choice but to adopt HBO’s favored DRM.

Which makes the nature of the “enablement” clear. By enabling your set-top box to be incompatible with your TV, HBO will enable you to buy an expensive new TV. I understand why HBO might want this. But they ought to be honest and admit what they are doing.

I can think of several names for their strategy. “Consumer Enablement” is not one of them.

“Hacking” Revisited

I wrote yesterday about the degradation of the term “hacking”. Today, the perfect illustration of my point turned up: a Hacker’s Hall of Fame published by The Learning Channel. It includes legitimate uber-programmers like Ken Thompson and Dennis Ritchie, along with computer criminals like Kevin Mitnick and Vladimir Levin. Putting those guys on the same list is an insult to Thompson and Ritchie.

Time to Retire “Hacking”

Many confidential documents are posted mistakenly on the web, allowing strangers to find them via search engines, according to a front-page article by Yuki Noguchi in today’s Washington Post. I had thought this was common knowledge, but apparently it’s not.

The most striking aspect of the article, to me at least, is that doing web searches for such material is called “Google hacking.” This is yet another step in the slow decay of the once-useful word “hack”, whose meaning is now so vague that it is best avoided altogether.

Originally, “hacker” was a term of respect, applied only to the greatest of (law-abiding) software craftsmen. The first stage of the term’s decline began when online intruders started calling themselves “hackers,” and the press began using the term “hacking” to refer to computer intrusions. This usage tends to reinforce the (often false) impression that intrusions require great technical skill.

As a shorthand term for illegal computer intrusions, “hacking” was at least useful. But the second phase of its decline has drained away even that meaning, as “hacking” has lost its tie to illegality and has become a general-purpose label of disapproval that can be slapped onto almost any activity. Nowadays almost any lawsuit over on-line activity involves an accusation of “hacking,” and the term has become a favorite of lobbyists seeking to ban previously accepted practices. Who would oppose a ban on hacking?

Calling something “hacking” conveys nothing more than the speaker’s disapproval of it. If you’re trying to communicate clearly, it’s time to retire “hacking” from your lexicon. If you don’t like what somebody is doing, tell us why.

Standards vs. Regulation

The broadcast flag “debate” never ceases to amaze me. It’s a debate about technology, but in forum after forum the participants are all lawyers. And it takes place in a weird reality distortion field where certain technological non sequiturs pass for unchallenged truth.

One of these is that the broadcast flag is a technical “standard.” Even opponents of the flag have taken to using this term. As I have written before, there is a difference between standards and regulation, and the broadcast flag is clearly regulation.

For future reference, here is a handy chart you can use to distinguish standards from non-standards.

STANDARD                      NOT A STANDARD
written by engineers          written by lawyers
voluntary                     mandatory
enables interoperation        prevents interoperation
backed by technologists       opposed by technologists

Simple, isn’t it?

UPDATE (March 7, 8:00 AM): On further reflection (brought on by the comments of readers, including Karl-Friedrich Lenz) I changed the table above. Originally the right-hand column said “regulation” but I now realize that goes too far.

Standards, or Collusion?

John T. Mitchell at InteractionLaw writes about the potential antitrust implications of backroom deals between copyright owners and technology makers.

If a copyright holder were to agree with the manufacturers of the systems for making lawful copies and of the systems for playing them to eliminate all trade in lawful copies unless each transaction (each resale, trade, gift or rental) has the consent of the copyright holder, there is of course no doubt that such agreement would constitute a naked restraint of trade. If, instead, the copyright holder agreed with the manufacturers of copying and playing technologies to deploy a system which simply obeys the instructions of the copyright holder (including instructions which have the purpose and effect of eliminating the resale, trade, gift or rental of the copy, or of enlarging the copyright monopoly by charging for private performances), then the agreement to have technology automatically do the deed is certainly no better than the first. It is akin to a company saying to the prospective co-conspirator: “Listen, I can’t agree with you to do what you are asking because my lawyers tell me it would be illegal, so what I’ll do is program my machine to do what you tell it to do, but just don’t tell me.”

I understand that antitrust law is suspicious of backroom deals in which companies agree not to produce certain otherwise legal products, but that there are some exceptions for standard-setting. Perhaps that is why the various inter-industry groups try to dress up their agreements as “standards.” As I have written before, most of these agreements don’t look at all like technical standards, and to label them as such is misleading.

True technical standards are voluntary, and allow products to be more functional by giving them a way to interoperate (i.e., to work together). Most of the DRM “standards” are mandatory, and make products less functional by banning some kinds of interoperation.

Whether these agreements violate antitrust law is beyond my expertise, but I do know that a reasonable exemption for technical standard-setting ought not to apply to them.

Misleading Term of the Week: “Rights”

A “right” is a legal entitlement – something that the law says you are allowed to do. But the term is often misused to refer to something else.

Consider, for example, the use of “digital rights management” (often abbreviated as DRM) to describe technologies that restrict the use of creative works. In practice, the “rights” being managed are really just rules that the copyright owner wants to impose; and those rules may bear little relation to the parties’ legal rights. Cloaking these restrictions in the language of “rights” makes them sound more neutral and unchangeable than they really are.

DRM advocates often put forth arguments that go roughly like this:

(1) we have built technology that doesn’t let you do X;

(2) therefore you cannot do X;

(3) therefore you do not have the right to do X;

(4) therefore you should be required to use technology that doesn’t let you do X.

The trickiest part of this argument is getting from (2) to (3). Using the term “digital rights management” in (1) and (2) makes the leap from (2) to (3) seem smaller than it really is.

There is at least one more common misuse of “rights” in the copyright/technology debate. This is in the use of the term “rights holder” to refer to copyright owners (but not to users). When someone says, “Content is shipped from the rights holder to the consumer,” the implication is that the rights of the copyright owner are more important than those of the user. There is no need for this term “rights holder.” “Copyright owner” will do just fine, and it will help us remember that both parties in the transaction have rights that need to be protected.

Misleading Term of the Week: “Standard”

A “standard” is a technical specification that allows systems to work together to make themselves more useful. Most people say, for good reasons, that they are in favor of technical standards. But increasingly, we are seeing the term “standard” misapplied to things that are really regulations in disguise.

True standards strive to make systems more useful by providing a voluntary set of rules that allow systems to understand each other. For example, a standard called RFC822 describes a common way to format email messages. If my email-sending software creates RFC822-compliant messages, and your email-receiving software understands RFC822-compliant messages, then you can read the email messages that I send you. Compliance with such a standard makes our software more functional.
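As a rough illustration (using Python’s standard email library, which implements RFC822’s successors; the addresses are placeholders), building a compliant message is trivial, and any compliant receiver can parse the result:

    # Rough sketch: constructing an RFC822-style email message with
    # Python's standard library. The addresses are placeholders.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Standards enable interoperation"
    msg.set_content("Any compliant mail reader can display this message.")

    # The wire format: "Header: value" lines, a blank line, then the body.
    print(msg.as_string())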

Crucially, standards like RFC822 are voluntary and nonexclusive. Nobody forces any email-software vendor to comply with RFC822, and there is nothing to stop a vendor’s product from complying simultaneously with both RFC822 and other standards.

Lately we have seen the word “standard” misapplied. For example, the Broadcast Protection Discussion Group (BPDG) calls its proposal a “standard,” though it is anything but. Unlike a real standard, BPDG is not voluntary. Unlike a real standard, it contains prohibitions rather than opportunities. Put the BPDG “standard” in front of experienced engineers, and they’ll tell you that it looks like a regulation, not like a standards document. BPDG is trying to make its restrictive regulations more palatable by wrapping them in the mantle of “standards.”

A more subtle misuse of “standard” arises in claims that we need to standardize on DRM technology. As I wrote previously:

In an attempt to sweep [the technical infeasibility of DRM] under the rug, the content industry has framed the issue cleverly as one of standardization. This presupposes that there is a menu of workable technologies, and the only issue is which of them to choose. They want us to ask which technology is best. But we should ask another question: Are any of these technologies workable in the first place? If not, then a standard for copy protection is as premature as a standard for teleportation.

Misleading Term of the Week: “Trusted System”

The term “trusted system” is often used in discussing Digital Rights/Restrictions Management (DRM). Somehow the “trusted” part is supposed to make us feel better about the technology. Yet often the things that make the system “trusted” are precisely the things we should worry about.

The meaning of “trusted” has morphed at least twice over the years.

“Trusted system” was originally used by the U.S. Department of Defense (DoD). To DoD, a “trusted system” was any system whose security you were obliged to rely upon. “Trusted” didn’t say anything about how secure the system was; all it said was that you needed to worry about the system’s level of security. “Trusted” meant that you had placed your trust in the system, whether or not that trust was ill-advised.

Since trusted systems had more need for security, DoD established security criteria that any system would (theoretically) have to meet before being used as a trusted system. Vendors began to label their systems as “trusted” if those systems met the DoD criteria (and sometimes if the vendor hoped they would). So the meaning of “trusted” morphed, from “something you have to rely upon” to “something you are safe to rely upon.”

In the 1990s, “trusted” morphed again. Somebody (perhaps Mark Stefik) realized that they could make DRM sound more palatable by calling it “trusted.” Where “trusted” had previously meant that the system’s owner could rely on the system’s behavior, it now came to mean that somebody else could rely on its behavior. Often it meant that somebody else could force the system to behave contrary to its owner’s wishes.

Today “trusted” seems to mean that somebody has some kind of control over the system. The key questions to ask are who has control, and what kind of control they have. Depending on the answers to those questions, a “trusted” system might be either good or bad.

Misleading Term of the Week: “Content Owner”

Many discussions of copyright refer to “content owners.” The language of ownership is often misused in these contexts, for example by saying that Disney “owns” The Lion King, or by saying that I “own” the content on this site.

The simple fact is that I don’t own the content on this site – at least not in the same way that I own my car. All I own is the copyright on the content. The copyright gives me a certain limited bundle of rights, and leaves for you, the reader, certain other rights, whether I like it or not. Using the rhetoric of “content ownership” confuses the issue, by falsely implying that the copyright owners have more rights than the law really gives them.

(It’s relatively harmless to refer to “my book” or “my film,” as long as everybody understands that you’re not claiming ownership of the content but merely stating a relationship, just as you might refer to “my brother” or “my hometown” without implying that you own either one.)