November 23, 2024

Washington Post on Biometrics

Today’s Washington Post has an article about the use of biometric technology and the civil-liberties resistance to it.

Interestingly, the article conflates two separate ideas: biometrics (the use of physical bodily characteristics to identify someone), and covert identification (identifying someone in a public place without their knowledge or consent). There are good civil-liberties arguments against covert identification. But the overt use of biometrics, especially in situations where identification is already expected and required, such as boarding an airplane, should be much less controversial.

There might even be ways of using biometrics that are more protective of privacy than existing identification measures are. Sometimes, you might be more comfortable having your face scanned than you would be revealing your name. Biometrics could give you that choice.

By implicitly assuming that biometric systems will be covert, the article, and apparently some of the sources it quotes, are missing the real potential of biometrics.

(Caveat: Biometrics aren’t worth much if they can’t reliably identify people, and there are good reasons to question the reliability of some biometrics.)

Wireless Tracking of Everything

Arnold Kling at The Bottom Line points to upcoming technologies that allow the attachment of tiny tags, which can be tracked wirelessly, to almost anything. He writes:

In my view, which owes much to David Brin, we should be encouraging the use of [these tags], while making sure that no single agency or elite has a monopoly on the ability to engage in tracking. Brin’s view is that tracking ability needs to be symmetric. We need to be able to keep track of politicians, government officials, and corporate executives. The danger is living in a society where one side can track but not be tracked.

Kling’s vision is of a world where nearly every object emits a kind of radio beacon identifying itself, and where these beacons are freely observable, allowing any person or device to take a census of the objects around it. It’s easy to see how this might be useful. Whether it is wise is another question entirely (which I’ll leave aside for now).

One thing is for sure: this vision is wildly implausible. Yes, tracking technology is practical, and may be inevitable. But tracking technology will evolve quickly to make Kling’s vision impossible.

First-generation tracking technology works by broadcasting a simple beacon, detectable by anyone, saying something like, “Device #67532712 is here.” If that were the end of the technological story, Kling might be right.

Like all technologies, tracking tags will evolve rapidly. Later generations won’t be so open. A tag might broadcast its identity in encrypted form, so that only authorized devices can track it. It might “lurk,” staying quiet until an authorized device sends it a wakeup signal. It might gossip with other tags across encrypted channels. Rather than being a passive identity tag, it will be an active agent, doing whatever it is programmed to do.

Once this happens, economics will determine what can be tracked by whom. It will be cheap and easy to put a tag into almost anything, but tracking the tag will be impossible without getting a cryptographic secret key that only the owner of the object, or the distributor of the tag, can provide. And this key will be provided only if doing so is in the interest of the provider.
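To make the contrast with the open first-generation beacon concrete, here is a minimal Python sketch. It is entirely hypothetical and not modeled on any real tag protocol: the tag broadcasts a keyed, rotating identifier, and only a reader holding the tag’s secret key (the key the owner or distributor would have to hand over) can link the broadcasts back to the object.

```python
# Hypothetical sketch, not any real tag protocol.
import hashlib
import hmac

# First-generation tag: an open beacon that anyone can read and link.
def open_beacon(device_id: int) -> str:
    return f"Device #{device_id} is here"

# Later-generation tag: a keyed, rotating beacon. Each time interval the tag
# broadcasts a truncated HMAC of the interval number under its secret key,
# so successive broadcasts look unrelated unless you hold the key.
def keyed_beacon(secret_key: bytes, interval: int) -> bytes:
    return hmac.new(secret_key, interval.to_bytes(8, "big"), hashlib.sha256).digest()[:8]

def identify(broadcast: bytes, interval: int, keys_i_hold: dict[bytes, str]) -> str | None:
    """An authorized reader tries each key it has been given by an owner or distributor."""
    for key, label in keys_i_hold.items():
        if hmac.compare_digest(broadcast, keyed_beacon(key, interval)):
            return label
    return None  # without the right key, the tag is effectively untrackable

key = b"secret provided by the tag's owner!!"
b = keyed_beacon(key, 1042)
print(identify(b, 1042, {key: "my suitcase"}))  # 'my suitcase'
print(identify(b, 1042, {}))                    # None -- no key, no tracking
```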

It’s interesting to contemplate what kinds of products and services will develop in such a world. The one thing that seems pretty certain is that it won’t be the simple, open world that Kling envisions.

What Color Is My Hat?

An article by Rob Lemos at news.com discusses the differences between “white hat,” “gray hat,” and “black hat” hackers. The article lists me as a gray hat.

In my book, there is no such thing as a gray hat. If you break into a computer system without the owner’s permission, or if you infringe a copyright, then your hat is black. Otherwise your hat is white.

This article, like so many others, tries to pin the “gray hat” image on anyone whose actions make a technology vendor unhappy. That’s why the article classifies me as a gray hat – because my research made the RIAA unhappy.

As a researcher, my job is not to make vendors happy. My job is to discover the truth and report it. If the truth makes a vendor look good, that’s great. If the truth makes a vendor look bad, so be it.

Misleading Term of the Week: “Trusted System”

The term “trusted system” is often used in discussing Digital Rights/Restrictions Management (DRM). Somehow the “trusted” part is supposed to make us feel better about the technology. Yet often the things that make the system “trusted” are precisely the things we should worry about.

The meaning of “trusted” has morphed at least twice over the years.

“Trusted system” was originally used by the U.S. Department of Defense (DoD). To DoD, a “trusted system” was any system whose security you were obliged to rely upon. “Trusted” didn’t say anything about how secure the system was; all it said was that you needed to worry about the system’s level of security. “Trusted” meant that you had placed your trust in the system, whether or not that trust was ill-advised.

Since trusted systems had more need for security, DoD established security criteria that any system would (theoretically) have to meet before being used as a trusted system. Vendors began to label their systems as “trusted” if those systems met the DoD criteria (and sometimes if the vendor hoped they would). So the meaning of “trusted” morphed, from “something you have to rely upon” to “something you are safe to rely upon.”

In the 1990s, “trusted” morphed again. Somebody (perhaps Mark Stefik) realized that they could make DRM sound more palatable by calling it “trusted.” Where “trusted” had previously meant that the system’s owner could rely on the system’s behavior, it now came to mean that somebody else could rely on its behavior. Often it meant that somebody else could force the system to behave contrary to its owner’s wishes.

Today “trusted” seems to mean that somebody has some kind of control over the system. The key questions to ask are who has control, and what kind of control they have. Depending on the answers to those questions, a “trusted” system might be either good or bad.

Comments on White House Cybersecurity Plan

As a computer security researcher and teacher, I was interested to see the White House’s draft cybersecurity plan. It looks to be mostly harmless, but there are a few things in it that surprised me.

First, I was surprised at the strong focus on issues late in the product lifecycle. Security is an issue throughout the life of a product, from its initial conception through its design, implementation, revision, use, and maintenance. The usual rule of thumb is that an ounce of prevention is worth a pound of cure – that attention to security early in the lifecycle makes a big difference later.

Despite this, the White House plan emphasizes remediation late in the lifecycle, over prevention earlier in the lifecycle. There is much discussion of intrusion detection, management, application of patches, and training of users; and not so much discussion of how products could be made more secure “out of the box.”

In the short run, these late-lifecycle methods are necessary, because it is too late to redo the early lifecycle stages of the products we are using today. But in the long run a big part of the answer has to lie in better product design, a goal to which the plan gives some lip service but not much concrete support.

The second surprise was the section on higher education (pp. 33-34 if you’re reading along at home).

Cybersecurity is a big mess, and there is plenty of blame to go around. You would expect the plan, as a political document, to avoid direct criticism of anyone, but instead to accentuate the positive by pointing to opportunities for improvement rather than inadequate performance. Indeed, that is the tone of most of the plan.

Universities alone seem to come in for direct criticism, having “many insecure systems” that “have been … exploited by hackers,” thereby “[placing] other sectors at risk.” Contrast this with the section on “large enterprises” (pp. 19-22). Universities “have been” exploited; large enterprises “can be” exploited. Universities “place other sectors at risk”; large enterprises “can play a unique role in developing resiliency.”

But the biggest surprise in the higher education section is that there is no mention of the fact that computer security education and research are taking place at universities. The discussions of other stakeholders are careful to genuflect to those sectors’ worthy training and research efforts, but the higher education section is strangely silent. This despite the fact that many of the basic technologies whose adoption the report urges were invented at universities. (Think, for instance, of public key crypto.)

This general lack of attention to the educational system is evident elsewhere in the report too. Consider discussion point D4-12 (emphasis added):

How can government and private industry establish programs to identify early students with a demonstrated interest in and/or talent for IT security work, encourage and develop their interest and skills, and direct them into the workforce?

That’s what we do at America’s schools and universities: we help students identify their interests and talents, we encourage and develop those interests and skills, and ultimately we help students direct themselves into the workforce. On the whole I think we do a pretty good job of it. We’re happy to have the help of government and industry, but it’s a bit dismaying to see this identified as somebody else’s job.