October 30, 2024

Misleading Term of the Week: “Trusted System”

The term “trusted system” is often used in discussing Digital Rights/Restrictions Management (DRM). Somehow the “trusted” part is supposed to make us feel better about the technology. Yet often the things that make the system “trusted” are precisely the things we should worry about.

The meaning of “trusted” has morphed at least twice over the years.

“Trusted system” was originally used by the U.S. Department of Defense (DoD). To DoD, a “trusted system” was any system whose security you were obliged to rely upon. “Trusted” didn’t say anything about how secure the system was; all it said was that you needed to worry about the system’s level of security. “Trusted” meant that you had placed your trust in the system, whether or not that trust was ill-advised.

Since trusted systems had more need for security, DoD established security criteria (the Trusted Computer System Evaluation Criteria, better known as the “Orange Book”) that any system would, in theory, have to meet before being used as a trusted system. Vendors began to label their systems as “trusted” if those systems met the DoD criteria (and sometimes merely if the vendor hoped they would). So the meaning of “trusted” morphed, from “something you have to rely upon” to “something you can safely rely upon.”

In the 1990s, “trusted” morphed again. Somebody (perhaps Mark Stefik) realized that they could make DRM sound more palatable by calling it “trusted.” Where “trusted” had previously meant that the system’s owner could rely on the system’s behavior, it now came to mean that somebody else could rely on its behavior. Often it meant that somebody else could force the system to behave contrary to its owner’s wishes.

Today “trusted” seems to mean that somebody has some kind of control over the system. The key questions to ask are who has control, and what kind of control they have. Depending on the answers to those questions, a “trusted” system might be either good or bad.