
Archives for 2008

A curious phone scam

My phone at work rings.  The caller ID has a weird number (“50622961841” – yes, it’s got an extra digit in it).  I answer.  It’s a recording telling me I can get lower rates on my card (what card?) if I just hit one to connect me to a representative.  Umm, okay.  “1”.  Recorded voice: “Just a moment.”  Human voice: “Hello, card center.”

At this point, I was mostly thinking that this was unsolicited spam, not a phishing attack.  Either way, I knew I had a limited time to ask questions before they’d hang up. “Who is this?  What company is this?”  They hung up.  Damn! I should have played along a little further.  I imagine they would have asked for my credit card number.  I could have then made something up to see how far the interaction would go.  Oh well.

Clearly, this was a variant on a credit card phishing attack, except instead of an email from a Nigerian dictator, it was a phone call.  I’m sure the caller ID is total garbage, although that, along with the demon-dialer, says that the scammer has some non-trivial infrastructure in place to make it happen.

So, the next time one of you receives an unsolicited call offering to get you lower rates on your card, please do play along and feed them random numbers when they ask for data.  At the very least, there’s some entertainment value.  If you’re lucky, you might be able to learn something that would be useful in mounting a criminal investigation.  Maybe halfway through, you could suddenly have an important meeting to get to, and see if you can get them to give you a callback phone number.

Update: reader “anon” points to an article from The Register that discusses this in more detail.

It can be rational to sell your private information cheaply, even if you value privacy

One of the standard claims about privacy is that people say they value their privacy but behave as if they don’t value it. The standard example involves people trading away private information for something of relatively little value. This argument is often put forth to rebut the notion that privacy is an important policy value. Alternatively, it is posed as a “what could they be thinking” puzzle.

I used to be impressed by this argument, but lately I have come to doubt its power. Let me explain why.

Suppose you offer to buy a piece of information about me, such as my location at this moment. I’ll accept the offer if the payment you offer me is more than the harm I would experience due to disclosing the information. What matters here is the marginal harm, defined as the amount of privacy-goodness I would have if I withheld the information, minus the amount I would have if I disclosed it.

The key word here is marginal. If I assume that my life would be utterly private, unless I gave this one piece of information to you, then I might require a high price from you. But if I assume that I have very little privacy to start with, then selling this one piece of information to you makes little difference, and I might as well sell it cheaply. Indeed, the more I assume that my privacy is lost no matter what I do, the lower a price I’ll demand from you. In the limit, where I expect you can get the information for free elsewhere even if I withhold it from you, I’ll be willing to sell you the information for a penny.
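
To make the point concrete, here is a toy sketch of the decision rule in Python. The numbers are invented, and “privacy-goodness” is just an abstract score; nothing in the argument depends on the particular values.

    # Toy model of the marginal-harm argument (all numbers are made up).
    def minimum_price(privacy_if_withheld, privacy_if_disclosed):
        """I accept any offer above the *marginal* harm of disclosing."""
        return privacy_if_withheld - privacy_if_disclosed

    # If I believe withholding really does protect me, my price is high:
    print(minimum_price(privacy_if_withheld=0.90, privacy_if_disclosed=0.20))  # roughly 0.70

    # If I believe the information leaks out anyway, withholding buys me
    # almost nothing, and I'll sell for next to nothing:
    print(minimum_price(privacy_if_withheld=0.25, privacy_if_disclosed=0.20))  # roughly 0.05

The price I quote measures the gap between the two scenarios, not how much I care about the information itself.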

Viewed this way, the price I charge you tells you at least as much about how well I think my privacy is protected as it does about how badly I want to keep my location private. So the answer to “what could they be thinking” is “they could be thinking they have no privacy in the first place”.

And in case you’re wondering: At this moment, I’m sitting in my office at Princeton.

Come Join Us Next Spring

It’s been an exciting summer here at the Center for Information Technology Policy. On Friday, we’ll be moving into a brand new building. We’ll be roughly doubling our level of campus activity—lectures, symposia and other events—from last year. You’ll also see some changes to our online activities, including a new, expanded Freedom to Tinker that will be hosted by the Center and will feature an expanded roster of contributors.

One of our key goals is to recruit visiting scholars who can enrich, and benefit from, our community. We’ve already lined up several visitors for the coming year, and will welcome them soon. But we also have space for several more. With the generous support of Princeton’s Woodrow Wilson School and School of Engineering and Applied Science, we are able to offer limited support for visitors to join us on a semester basis in spring 2009. The announcement, available here, reads as follows:

CITP Seeks Visiting Faculty, Fellows or Postdocs for Spring 2009 Semester

The Center for Information Technology Policy (CITP) at Princeton University is seeking visiting faculty, fellows, or postdocs for the Spring 2009 semester.

About CITP

Digital technologies and public life are constantly reshaping each other—from net neutrality and broadband adoption, to copyright and file sharing, to electronic voting and beyond.

Realizing digital technology’s promise requires a constant sharing of ideas, competencies and norms among the technical, social, economic and political domains.

The Center for Information Technology Policy is Princeton University’s effort to meet this challenge. Its new home, opening in September 2008, is a state-of-the-art facility designed from the ground up for openness and collaboration. Located at the intellectual and physical crossroads of Princeton’s engineering and social science communities, the Center’s research, teaching and public programs are building the intellectual and human capital that our technological future demands.

To see what this mission can mean in practice, take a look at our website, at http://citp.princeton.edu.

One-Term Visiting Positions in Spring 2009

The Center has secured limited resources from a range of sources to support visitors this coming spring. Visitors will conduct research, engage in public programs, and may teach a seminar during their appointment. They’ll play an important role at a pivotal time in the development of this new center. Visitors will be appointed to a visiting faculty or visiting fellow position, or a postdoctoral role, depending on qualifications.

We are happy to hear from anyone who works at the intersection of digital technology and public life. In addition to our existing strengths in computer science and sociology, we are particularly interested in identifying engineers, economists, lawyers, civil servants and policy analysts whose research interests are complementary to our existing activities. Levels of support and official status will depend on the background and circumstances of each appointee. Terms of appointment will be from February 1 until either July 1 or September 1 of 2009.

If you are interested, please email a letter of interest, stating background, intended research, and salary requirements, to David Robinson, Associate Director of the Center, at . Please include a copy of your CV.

Deadline: October 15, 2008.

Beyond this particular recruiting effort, there are other ways to get involved—interested students can apply for graduate study in the 2009-2010 school year, and we continue to seek out suitable candidates for externally-funded fellowships. More information about those options is here.

Cheap CAPTCHA Solving Changes the Security Game

ZDNet’s “Zero Day” blog has an interesting post on the gray-market economy in solving CAPTCHAs.

CAPTCHAs are those online tests that ask you to type in a sequence of characters from a hard-to-read image. By doing this, you prove that you’re a real person and not an automated bot – the assumption being that bots cannot decipher the CAPTCHA images reliably. The goal of CAPTCHAs is to raise the price of access to a resource, by requiring a small quantum of human attention, in the hope that legitimate human users will be willing to expend a little attention but spammers, password guessers, and other unwanted users will not.

It’s no surprise, then, that a gray market in CAPTCHA-solving has developed, and that that market uses technology to deliver CAPTCHAs efficiently to low-wage workers who solve many CAPTCHAs per hour. It’s no surprise, either, that there is vigorous competition between CAPTCHA-solving firms in India and elsewhere. The going rate, for high-volume buyers, seems to be about $0.002 per CAPTCHA solved.

I would happily pay that rate to have somebody else solve the CAPTCHAs I encounter. I see two or three CAPTCHAs a week, so this would cost me about twenty-five cents a year. I assume most of you, and most people in the developed world, would happily pay that much to never see CAPTCHAs. There’s an obvious business opportunity here, to provide a browser plugin that recognizes CAPTCHAs and outsources them to low-wage solvers – if some entrepreneur can overcome transaction costs and any legal issues.
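
For the curious, here is the arithmetic behind that estimate (the per-CAPTCHA price is the going rate quoted above; my weekly count is a rough guess):

    # Back-of-the-envelope cost of outsourcing my own CAPTCHAs.
    price_per_captcha = 0.002   # dollars, the high-volume rate quoted above
    captchas_per_week = 2.5     # my rough estimate
    weeks_per_year = 52

    annual_cost = price_per_captcha * captchas_per_week * weeks_per_year
    print(f"${annual_cost:.2f} per year")   # about $0.26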

Of course, the fact that CAPTCHAs can be solved for a small fee, and even that most users are willing to pay that fee, does not make CAPTCHAs useless. They still do raise the cost of spamming and other undesired behavior. The key question is whether imposing a $0.002 fee on certain kinds of accesses deters enough bad behavior. That’s an empirical question that is answerable in principle. We might not have the data to answer it in practice, at least not yet.

Another interesting question is whether it’s good public policy to try to stop CAPTCHA-solving services. It’s not clear whether governments can actually hinder CAPTCHA-solving services enough to raise the price (or risk) of using them. But even assuming that governments can raise the price of CAPTCHA-solving, the price increase will deter some bad behavior but will also prevent some beneficial transactions such as outsourcing by legitimate customers. Whether the bad behavior deterred outweighs the good behavior deterred is another empirical question we probably can’t answer yet.

On the first question – the impact of cheap CAPTCHA-solving – we’re starting a real-world experiment, like it or not.

Lenz Ruling Raises Epistemological Questions

Stephanie Lenz’s case will be familiar to many of you: After publishing a 29-second video on YouTube that shows her toddler dancing to the Prince song “Let’s Go Crazy,” Ms. Lenz received email from YouTube, informing her that the video was being taken down at Universal Music’s request. She filed a DMCA counter-notification claiming the video was fair use, and the video was put back up on the site. Now Ms. Lenz, represented by the EFF, is suing Universal, claiming that the company violated section 512(f) of the Digital Millennium Copyright Act. Section 512(f) creates liability for a copyright owner who “knowingly materially misrepresents… that material or activity is infringing.”

On Wednesday, the judge denied Universal’s motion to dismiss the suit. The judge held that “in order for a copyright owner to proceed under the DMCA with ‘a good faith belief that the use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law,’ the owner must evaluate whether the material makes fair use of the copyright.”

The essence of Lenz’s claim is that when Universal sent a notice claiming her use was “not authorized by… the law,” they already knew her use was actually lawful. She cites news coverage that suggests that Universal’s executives watched the video and then, at Prince’s urging, sent a takedown notice they would not have opted to send on their own. Wednesday’s ruling gives the case a chance to proceed into discovery, where Lenz and the EFF can try to find evidence to support their theory that Universal’s lawyers recognized her use was legally authorized under fair use—but caved to Prince’s pressure and sent a spurious notice anyway.

Universal’s view is very different from Lenz’s and, apparently, from the judge’s—they claim that the sense of “not authorized by… the law” required for a DMCA takedown notice is that a use is unauthorized in the first instance, before possible fair use defenses are considered. This position is very important to the music industry’s current practice of sending automated takedown notices based on recognizing copyright works; if copyright owners were required to form any kind of belief about the fairness of a use before asking for a takedown, then this kind of fully computer-automated mass request might not be possible, since it’s hard to imagine a computer performing the four-factor weighing test that informs a fair use determination.

Seen in this light, the case has at least as much to do with the murky epistemology of algorithmic inference as it does with fair use per se. The music industry uses takedown bots to search out and flag potentially infringing uses of songs, and then in at least some instances to send automated takedown notices. If humans at Universal manually review a random sample of the bot’s output, and the statistics and sampling issues are well handled, and they find that a certain fraction of the bot’s output is infringing material, then they can make an inference. They can infer, with the statistically appropriate level of confidence, that the same fraction of songs in a second sample, consisting of bot-flagged songs “behind a curtain” that have not been manually reviewed, are also infringing. If the fraction of material that’s infringing is high enough—e.g. 95 percent?—then one can reasonably or in good faith (at least in the layperson, everyday sense of those terms) believe that an unexamined item turned up by the bot is infringing.
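
To make “the statistically appropriate level of confidence” concrete, here is one standard way the calculation could go: a Wilson lower confidence bound on the fraction of infringing items in the manually reviewed sample. The numbers are invented for illustration and say nothing about any actual takedown bot.

    import math

    def wilson_lower_bound(successes, n, z=1.96):
        """Lower end of the Wilson score interval for a binomial proportion
        (z = 1.96 corresponds to roughly 95% confidence)."""
        if n == 0:
            return 0.0
        phat = successes / n
        denom = 1 + z * z / n
        centre = phat + z * z / (2 * n)
        margin = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
        return (centre - margin) / denom

    # Hypothetical: reviewers examine 500 bot-flagged clips and find 490 infringing.
    # The bound then supports a claim like "with ~95% confidence, at least this
    # fraction of the *unreviewed* flags are infringing too" -- provided the
    # unreviewed flags come from the same population as the reviewed sample.
    print(wilson_lower_bound(490, 500))   # roughly 0.96

Whether a bound like that is enough to constitute a “good faith belief” about any particular unreviewed clip is, of course, exactly the question the case raises.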

The same might hold true if fair use is also considered: As long as a high enough fraction of the material flagged by the bot in the first, manual human review phase turns out to be infringement-not-defensible-as-fair-use, a human can reasonably believe that a given instance flagged by the bot—still “behind the curtain” and not seen by human eyes—is probably an instance of infringement-not-defensible-as-fair-use.

The general principle here would be: If you know the bot is usually right (for some definition of “usually”), and don’t have other information about some case X on which the bot has offered a judgment, then it is reasonable to believe that the bot is right in case X—indeed, it would be unreasonable to believe otherwise, without knowing more. So it seems like there is some level of discernment, in a bot, that would suffice in order for a person to believe in good faith that any given item identified by the bot was an instance of infringement suitable for a DMCA complaint. (I don’t know what the threshold should be, who should decide, or whether the industry’s current bots meet it.) This view, when it leads to auto-generated takedown requests, has the strange consequence that music industry representatives are asserting that they have a “good faith belief” that certain copies of certain media are infringing, even when they aren’t aware that those copies exist.

Here’s where the sidewalk ends, and I begin to wish I had formal legal training: What are the epistemic procedures required to form a “good faith belief”? How about a “reasonable belief”? This kind of question in the law surely predates computers: It was Oliver Wendell Holmes, Jr. who first created the reasonable man, a personage Louis Menand has memorably termed “the fictional protagonist of modern liability theory.” I don’t even know to whom this question should be addressed: Is there a single standard nationally? Does it vary circuit by circuit? Statute by statute? Has it evolved in response to computer technology? Readers, can you help?