December 28, 2009

The Role of Worst Practices in Insecurity

These days, security advisors talk a lot about Best Practices: established procedures that are generally held to yield good results. Deploy Best Practices in your organization, the advisors say, and your security will improve. That’s true, as far as it goes, but often we can make more progress by working to eliminate Worst Practices.

A Worst Practice is something that most of us do, even though we know it’s a bad idea. One current Worst Practice is the way we use passwords to authenticate ourselves to web sites. Sites’ practices drive users to re-use the same password across many sites, and to expose themselves to phishing and keylogging attacks. We know we shouldn’t be doing this, but we keep doing it anyway.

The key to addressing Worst Practices is to recognize that they persist for a reason. If ignorance is the cause, it’s not a Worst Practice — remember that Worst Practices, by definition, are widely known to be bad. There’s typically some kind of collective action problem that sustains a Worst Practice, some kind of Gordian Knot that must be cut before we can eliminate the practice.

This is clearly true for passwords. If you’re building a new web service, and you’re deciding how to authenticate your users, passwords are the easy and obvious choice. Users understand them; they don’t require coordination with any other company; and there’s probably a password-handling module that plugs right into your development environment. Better authentication will be a “maybe someday” feature. Developers make this perfectly rational choice every day — and so we’re stuck with a Worst Practice.
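To see just how low that barrier is, here’s a minimal sketch of the kind of password handling a developer can wire up in an afternoon. It’s Python using only the standard library; the function names and the in-memory user table are invented for illustration, not taken from any particular framework:

    import hashlib
    import hmac
    import os

    _users = {}  # username -> (salt, hash); a real site would use a database

    def register(username, password):
        # Store a salted, iterated hash rather than the raw password.
        salt = os.urandom(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        _users[username] = (salt, pw_hash)

    def verify(username, password):
        if username not in _users:
            return False
        salt, stored = _users[username]
        # Recompute the hash and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored)

Twenty lines, no coordination with any other company, and your login page works. That economy is exactly what keeps the Worst Practice entrenched.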

Solutions to this and other Worst Practices will require leadership by big companies. Google, Microsoft, Facebook, and others will have to step up and work together to put better practices in place. In the user authentication space we’re seeing some movement with new technologies such as OpenID, which reduce the number of places users must log into, thereby easing the move to better practices. But on this and other Worst Practices, we have a long way to go.

Which Worst Practices annoy you? And what can be done to address them? Speak up in the comments.

DARPA Pays MIT to Pay Someone Who Recruited Someone Who Recruited Someone Who Recruited Someone Who Found a Red Balloon

DARPA, the Defense Department’s research arm, recently sponsored a “Network Challenge” in which groups competed to find ten big red weather balloons that were positioned in public places around the U.S. The first team to discover where all the balloons were would win $40,000.

A team from MIT won, using a clever method of sharing the cash with volunteers. MIT let anyone join their team, and they paid money to the members who found balloons, as well as the people who recruited the balloon-finders, and the people who recruited the balloon-finder-finders. For example, if Alice recruited Bob, and Bob recruited Charlie, and Charlie recruited Diane, and Diane found a balloon, then Alice would get $250, Bob would get $500, Charlie would get $1000, and Diane would get $2000. Multi-level marketing meets treasure hunting! It’s the Amway of balloon-hunting!
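To make the arithmetic concrete: the finder of a balloon got $2000, and each person going up the recruitment chain got half of what the person below them got. Here’s a quick sketch of that payout rule (the function, and the reuse of the names from the example above, are mine, purely for illustration):

    def payouts(chain, finder_reward=2000.0):
        # chain lists people from the balloon-finder up through the recruiters;
        # each person receives half of what the person they recruited received.
        result = {}
        reward = finder_reward
        for person in chain:
            result[person] = reward
            reward /= 2
        return result

    print(payouts(["Diane", "Charlie", "Bob", "Alice"]))
    # {'Diane': 2000.0, 'Charlie': 1000.0, 'Bob': 500.0, 'Alice': 250.0}

A nice property of the halving rule is that the payments for any one balloon form a geometric series, so they can never total more than $4000 no matter how long the recruitment chain gets.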

On DARPA’s side, this was inspired by the famous Grand Challenge and Urban Challenge, in which teams built autonomous cars that had to drive themselves safely through a desert landscape and then a city.

The autonomous-car challenges have obvious value, both for the military and in ordinary civilian life. But it’s hard to say the same for the balloon-hunting challenge. Granted, the balloon-hunting prize was much smaller, but it’s still hard to avoid the impression that the balloon hunt was more of a publicity stunt than a spur to research. We already knew that the Internet lets people organize themselves into effective groups at a distance. We already knew that people like a scavenger hunt, especially if you offer significant cash prizes. And we already knew that you can pay Internet strangers to do jobs for you. But how are we going to apply what we learned in the balloon hunt?

The autonomous-car challenge has value because it asks the teams to build something that will eventually have practical use. Someday we will all have autonomous cars, and they will have major implications for our transportation infrastructure. The autonomous-car challenge helped to bring that day closer. But will the day ever come when all, or even many, of us will want to pay large teams of people to find things for us?

(There’s more to be said about the general approach of offering challenge prizes as an alternative to traditional research funding, but that’s a topic for another day.)

Soghoian: 8 Million Reasons for Real Surveillance Oversight

If you’re interested at all in surveillance policy, go and read Chris Soghoian’s long and impassioned post today. Chris drops several bombshells into the debate, including an audio recording of a closed-door talk by Sprint Nextel’s Electronic Surveillance Manager, bragging about how easy the company has made it for law enforcement to get customers’ location data — so easy that the company has serviced over eight million law enforcement requests for customer location data.

Here’s the juiciest quote:

“[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.”

— Paul Taylor, Electronic Surveillance Manager, Sprint Nextel.

Chris has more quotes, facts, and figures, all implying that electronic surveillance is much more widespread than many of us had thought.

Probably, many of these surveillance requests were justified, in the sense that a fair-minded citizen would think their expected public benefit justified the intrusiveness. How many were justified, we don’t know. We can’t know — and that’s a big part of the problem.

It’s deeply troubling that this has happened without significant public debate or even much disclosure. We need to have a discussion — and quickly — about what the rules for electronic surveillance should be.

Tech Policy in the SkyMall Catalog

These days tech policy issues seem to pop up everywhere. During a recent flight delay, I was flipping through the SkyMall Catalog (“Holiday 2009” edition), and found tech policy even there.

There were lots of ads for surveillance and recording devices, some of them clearly useful for illegal purposes: the New Agent Cam HD Color Video Spy Camera (p. 14), the Original Agent Cam Color Video Spy Camera (p. 14), the Video Recording Sunglasses (p. 23), the Wireless Remote Controlled Pan and Tilt Surveillance Camera (p. 23), the Spy Pen (with hidden audio and video recorders, p. 42), the Orbitor Electronic Listening Device, and the GPS Tracking Key (p. 224).

There were also plenty of ads for media-copying technologies, of the sort that various copyright owners might find objectionable: the LP and Cassette to CD Recorder (p. 16), the Slide and Negative to Digital Picture Converter (p. 17), the Digital Photo to DVD Converter (p. 20), the Easy iPod Media Sharer (p. 27), the One Step DVD/CD Duplicator (p. 31), the Photograph to Digital Picture Converter (p. 40), and the Crosley Encoding Turntable (converts LP records to MP3s, p. 179).

Are these things illegal? Probably not, but there are surely people out there who would want to make them illegal. And some of them are pretty good ways to get a tech policy debate started.

I’m not about to start reading the SkyMall catalog for fun. But it’s interesting to know that it offers more than just Slankets and yeti statues.

Robots and the Law

Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town, so I dropped by and heard an interesting discussion.

Here’s the official announcement:

Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.

The key questions are how robots differ from past technologies, and how those differences change the law and policy issues we face.

Three aspects of robots seemed to recur in the discussion: robots take actions that have real consequences in the world; robots act autonomously; and we tend to see robots as beings and not just machines.

The last issue — robots as beings — is mostly a red herring for our purposes, notwithstanding its appeal as a conversational topic. Robots are nowhere near having the rights of a person or even of a sentient animal, and I suspect that we can’t really imagine what it would be like to interact with a robot that qualified as a conscious being. Our brains seem to be wired to treat self-propelled objects as beings — witness the surprising acceptance of robot “dogs” that aren’t much like real dogs — but that doesn’t mean we should grant robots personhood.

So let’s set aside the consciousness issue and focus on the other two: acting in the world, and autonomy. These attributes are already present in many technologies today, even in the purely electronic realm. Consider, for example, the complex of computers, network equipment, and software that makes up Google’s data centers. Its actions have significant implications in the real world, and it is autonomous, at least in the sense in which the panelists seemed to be using the term “autonomous”: it exhibits complex behavior without direct, immediate human instruction, and its behavior is often unpredictable even to its makers.

In the end, it seemed to me that the legal and policy issues raised by future robots will not be new in kind, but will just be extrapolations of the issues we’re already facing with today’s complex technologies — and not a far extrapolation but more of a smooth progression from where we are now. These issues are important, to be sure, and I was glad to hear smart panelists debating them, but I’m not convinced yet that we need a law of the robot. When it comes to the legal challenges of technology, the future will be like the past, only more so.

Still, if talking about robots will get policymakers to pay more attention to important issues in technology policy, then by all means, let’s talk about robots.