Ernest Miller at LawMeme explains why there is so little fair use in DVD reviews.
Crackdown on Greek Gamers
Tauzin Circulating Draft "Broadcast Flag" Bill
Rep. Billy Tauzin is circulating a draft of a bill that would restrict digital technology. One effect of the bill would be to mandate “broadcast flag” technology.
The bill has not yet been introduced.
Misleading Term of the Week: "Trusted System"
The term “trusted system” is often used in discussing Digital Rights/Restrictions Management (DRM). Somehow the “trusted” part is supposed to make us feel better about the technology. Yet often the things that make the system “trusted” are precisely the things we should worry about.
The meaning of “trusted” has morphed at least twice over the years.
“Trusted system” was originally used by the U.S. Department of Defense (DoD). To DoD, a “trusted system” was any system whose security you were obliged to rely upon. “Trusted” didn’t say anything about how secure the system was; all it said was that you needed to worry about the system’s level of security. “Trusted” meant that you had placed your trust in the system, whether or not that trust was ill-advised.
Since trusted systems had more need for security, DoD established security criteria that any system would (theoretically) have to meet before being used as a trusted system. Vendors began to label their systems as “trusted” if those systems met the DoD criteria (and sometimes if the vendor hoped they would). So the meaning of “trusted” morphed, from “something you have to rely upon” to “something you are safe to rely upon.”
In the 1990s, “trusted” morphed again. Somebody (perhaps Mark Stefik) realized that DRM could be made to sound more palatable by calling it “trusted.” Where “trusted” had previously meant that the system’s owner could rely on the system’s behavior, it now came to mean that somebody else could rely on its behavior. Often it meant that somebody else could force the system to behave contrary to its owner’s wishes.
Today “trusted” seems to mean that somebody has some kind of control over the system. The key questions to ask are who has control, and what kind of control they have. Depending on the answers to those questions, a “trusted” system might be either good or bad.
Lessig/DRM/End-To-End Debate: Resolved?
Larry Lessig and I had a brief blog discussion last week about the meaning of the end-to-end principle(s), and how end-to-end applies to DRM. The discussion continued off-line, and we ended up in pretty close agreement. Here is my version of what we agree on:
(1) End-to-end is not a single principle, but a cluster of related principles. Some are engineering principles, and others are policy/economic principles. It is good to be clear about what version of end-to-end you are using.
(2) The MPAA/Hollings approach does harm by forcing all computers to implement certain functions, even though those functions are not needed by all law-abiding network users. This violates the engineering end-to-end principle that says that functions should not be required unless needed by all.
(3) The MPAA/Hollings approach does even more harm by forbidding a great many non-infringing functions from being implemented at all. This offends both the engineering and the policy versions of the end-to-end principle, all of which favor giving end users flexibility in how they use the network.
(4) DRM is generally a bad idea, but some DRM systems are worse than others.