October 12, 2024

Another Privacy Misstep from Facebook

Facebook is once again clashing with its users over privacy. As a user myself, I was pretty unhappy about the recently changed privacy controls. I felt that Facebook was trying to trick me into loosening controls on my information. Though the initial letter from Facebook founder Mark Zuckerberg painted the changes as pro-privacy — which led more than 48,000 users to click the “I like this” button — the actual effect of the company’s suggested new policy was to allow more public access to information. Though the company has backtracked on some of the changes, problems remain.

Some of you may be wondering why Facebook users are complaining about privacy, given that the site’s main use is to publish private information about yourself. But Facebook is not really about making your life an open book. It’s about telling the story of your life. And like any autobiography, your Facebook-story will include a certain amount of spin. It will leave out some facts and will likely offer more and different levels of detail depending on the audience. Some people might not get to hear your story at all. For Facebook users, privacy means not the prevention of all information flow, but control over the content of their story and who gets to read it.

So when Facebook tries to monetize users’ information by passing that information along to third parties, such as advertisers, users get angry. That’s what happened two years ago with Facebook’s ill-considered Beacon initiative: Facebook started telling advertisers what you had done — telling your story to strangers. But perhaps even worse, Facebook sometimes added items to your wall about what you had purchased — editing your story, without your permission. Users revolted, and Facebook shuttered Beacon.

Viewed through this lens, Facebook’s business dilemma is clear. The company is sitting on an ever-growing treasure trove of information about users. Methods for monetizing this information are many and obvious, but virtually all of them require either telling users’ stories to third parties, or modifying users’ stories — steps that would break users’ mental model of Facebook, triggering more outrage.

What Facebook has, in other words, is a governance problem. Users see Facebook as a community in which they are members. Though Facebook (presumably) has no legal obligation to get users’ permission before instituting changes, it makes business sense to consult the user community before making significant changes in the privacy model. Announcing a new initiative, only to backpedal in the face of user outrage, can’t be the best way to maximize long-term profits.

The challenge is finding a structure that allows the company to explore new business opportunities, while at the same time securing truly informed consent from the user community. Some kind of customer advisory board seems like an obvious approach. But how would the members be chosen? And how much information and power would they get? This isn’t easy to do. But the current approach isn’t working either. If your business is based on user buy-in to an online community, then you have to give that community some kind of voice — you have to make it a community that users want to inhabit.

The Role of Worst Practices in Insecurity

These days, security advisors talk a lot about Best Practices: established procedures that are generally held to yield good results. Deploy Best Practices in your organization, the advisors say, and your security will improve. That’s true, as far as it goes, but often we can make more progress by working to eliminate Worst Practices.

A Worst Practice is something that most of us do, even though we know it’s a bad idea. One current Worst Practice is the way we use passwords to authenticate ourselves to web sites. Sites’ practices drive users to re-use the same password across many sites, and to expose themselves to phishing and keylogging attacks. We know we shouldn’t be doing this, but we keep doing it anyway.

The key to addressing Worst Practices is to recognize that they persist for a reason. If ignorance is the cause, it’s not a Worst Practice — remember that Worst Practices, by definition, are widely known to be bad. There’s typically some kind of collective action problem that sustains a Worst Practice, some kind of Gordian Knot that must be cut before we can eliminate the practice.

This is clearly true for passwords. If you’re building a new web service, and you’re deciding how to authenticate your users, passwords are the easy and obvious choice. Users understand them; they don’t require coordination with any other company; and there’s probably a password-handling module that plugs right into your development environment. Better authentication will be a “maybe someday” feature. Developers make this perfectly rational choice every day — and so we’re stuck with a Worst Practice.

Solutions to this and other Worst Practices will require leadership by big companies. Google, Microsoft, Facebook and others will have to step up and work together to put better practices in place. In the user authentication space we’re seeing some movement with new technologies such as OpenID, which reduce the number of places users must log into, thereby easing the move to better practices. But on this and other Worst Practices, we have a long way to go.

Which Worst Practices annoy you? And what can be done to address them? Speak up in the comments.

DARPA Pays MIT to Pay Someone Who Recruited Someone Who Recruited Someone Who Recruited Someone Who Found a Red Balloon

DARPA, the Defense Department’s research arm, recently sponsored a “Network Challenge” in which groups competed to find ten big red weather balloons that were positioned in public places around the U.S. The first team to discover where all the balloons were would win $40,000.

A team from MIT won, using a clever method of sharing the cash with volunteers. MIT let anyone join their team, and they paid money to the members who found balloons, as well as the people who recruited the balloon-finders, and the people who recruited the balloon-finder-finders. For example, if Alice recruited Bob, and Bob recruited Charlie, and Charlie recruited Diane, and Diane found a balloon, then Alice would get $250, Bob would get $500, Charlie would get $1000, and Diane would get $2000. Multi-level marketing meets treasure hunting! It’s the Amway of balloon-hunting!
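The payments in MIT’s scheme halve at each step up the recruitment chain, so the total paid out per balloon is bounded by a geometric series: $2000 + $1000 + $500 + $250 + … never reaches $4000, which is how a fixed $40,000 prize could cover ten balloons no matter how long the chains grew. A minimal sketch of the reward rule (the function name and chain representation here are my own illustration, not MIT’s actual code):

```python
def balloon_payouts(chain, finder_reward=2000):
    """Given a recruitment chain ordered from the balloon-finder up to
    the earliest recruiter, pay the finder the full reward and halve
    the payment at each step up the chain."""
    payouts = {}
    reward = finder_reward
    for person in chain:
        payouts[person] = reward
        reward //= 2  # each recruiter up the chain gets half
    return payouts

# Diane found the balloon; Charlie recruited Diane;
# Bob recruited Charlie; Alice recruited Bob.
print(balloon_payouts(["Diane", "Charlie", "Bob", "Alice"]))
# → {'Diane': 2000, 'Charlie': 1000, 'Bob': 500, 'Alice': 250}
```

The halving is the clever part: it rewards recruiting without letting long chains blow the budget, since even an arbitrarily long chain pays out less than twice the finder’s reward.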

On DARPA’s side, this was inspired by the famous Grand Challenge and Urban Challenge, in which teams built autonomous cars that had to drive themselves safely through a desert landscape and then a city.

The autonomous-car challenges have obvious value, both for the military and in ordinary civilian life. But it’s hard to say the same for the balloon-hunting challenge. Granted, the balloon-hunting prize was much smaller, but it’s still hard to avoid the impression that the balloon hunt was more of a publicity stunt than a spur to research. We already knew that the Internet lets people organize themselves into effective groups at a distance. We already knew that people like a scavenger hunt, especially if you offer significant cash prizes. And we already knew that you can pay Internet strangers to do jobs for you. But how are we going to apply what we learned in the balloon hunt?

The autonomous-car challenge has value because it asks the teams to build something that will eventually have practical use. Someday we will all have autonomous cars, and they will have major implications for our transportation infrastructure. The autonomous-car challenge helped to bring that day closer. But will the day ever come when all, or even many, of us will want to pay large teams of people to find things for us?

(There’s more to be said about the general approach of offering challenge prizes as an alternative to traditional research funding, but that’s a topic for another day.)

Tinkering with Disclosed Source Voting Systems

As Ed pointed out in October, Sequoia Voting Systems, Inc. (“Sequoia”) announced then that it intended to publish the source code of their voting system software, called “Frontier”, currently under development. (Also see EKR’s post: “Contrarianism on Sequoia’s Disclosed Source Voting System”.)

Yesterday, Sequoia made good on this promise and you can now pull the source code they’ve made available from their Subversion repository here:
http://sequoiadev.svn.beanstalkapp.com/projects/

Sequoia refers to this move in its release as “the first public disclosure of source code from a voting systems manufacturer”. Carefully parsed, that’s probably correct: there have been unintentional disclosures of source code (e.g., Diebold in 2003) and I know of two other voting industry companies that have disclosed source code (VoteHere, now out of business, and Everyone Counts), but these were either not “voting systems manufacturers” or the disclosures were not available publicly. Of course, almost all of the research systems (like VoteBox and Helios) have been truly open source. Groups like OSDV and OVC have released or will soon release voting system source code under open source licenses.

I wrote a paper ages ago (2006) on the use of open and disclosed source code for voting systems and I’m surprised at how well that analysis and set of recommendations has held up (the original paper is here, an updated version is in pages 11–41 of my PhD thesis).

The purpose of my post here is to highlight one point of that paper in a bit of detail: disclosed source software licenses need to have a few specific features to be useful to potential voting system evaluators. I’ll start by describing three examples of disclosed source software licenses and then talk about what I’d like to see, as a tinkerer, in these agreements.

Soghoian: 8 Million Reasons for Real Surveillance Oversight

If you’re interested at all in surveillance policy, go and read Chris Soghoian’s long and impassioned post today. Chris drops several bombshells into the debate, including an audio recording of a closed-door talk by Sprint Nextel’s Electronic Surveillance Manager, bragging about how easy the company has made it for law enforcement to get customers’ location data — so easy that the company has serviced over eight million law enforcement requests for customer location data.

Here’s the juiciest quote:

“[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.”

— Paul Taylor, Electronic Surveillance Manager, Sprint Nextel.

Chris has more quotes, facts, and figures, all implying that electronic surveillance is much more widespread than many of us had thought.

Probably, many of these surveillance requests were justified, in the sense that a fair-minded citizen would think their expected public benefit justified the intrusiveness. How many were justified, we don’t know. We can’t know — and that’s a big part of the problem.

It’s deeply troubling that this has happened without significant public debate or even much disclosure. We need to have a discussion — and quickly — about what the rules for electronic surveillance should be.