
The Return of 3-D Movies

[Today’s guest post is by longtime reader and commenter Mitch Golden. Thanks, Mitch! If you’re a Freedom to Tinker reader and have a great idea for a guest post, please let me know. – Ed]

Last Friday I was at a movie preview for a concert movie called U23D, which, as you will correctly surmise, was a U2 concert filmed in digital 3D.

A few weeks ago I saw the new film Beowulf, also in 3D.

As I look out the office window to the AMC Loews on 84th St, I see that the marquee is already pitching Hannah Montana 3D, not due out until February.

And outside that same theater is a 3D movie poster for the upcoming Speed Racer movie.

Suddenly everything is floating in space, after decades of flatness. What gives?

Those of us who frequent Freedom To Tinker know that there are two approaches for producers operating in our world of nearly-zero-cost copying. The option most often pursued thus far by the content industries has been to pin their hopes on a technological fix – DRM – and then to use political muscle to get governments around the world to mandate its use. So far this strategy has been pretty much a total train wreck for all the parties involved – from the record industry to Microsoft – and it has had the disastrous side effect (from their point of view) of persuading an entire generation – and then some – that the media companies are “the man,” and that file sharing is therefore not immoral.

Of course the other option – thus far being resisted strenuously by the record labels – is to try a new business model. Sell the customers something better than what they can get for free. Maybe – just maybe – that’s what’s going on here.

As you doubtless know, there’s nothing new about 3D movies or photos. In fact, they go back nearly to the very beginning of photography. To make the 3D effect work, you just need to present two images, shot from slightly different perspectives, one to each eye. While various systems have been invented over the years to do this (see the Wikipedia page on the subject for a bit of the history of the technology), they all shared, to a greater or lesser extent, the same faults: (a) the theater had to install special equipment (including a more expensive screen that reflects polarized light without depolarizing it), (b) the film was bigger and more difficult to handle, and (c) splicing the film print when it broke required careful treatment to avoid getting the two eyes out of sync. So it just wasn’t quite worth it.
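The basic requirement – delivering a different image to each eye – is simple enough to sketch in code. Here is a minimal example (a sketch in Python using the Pillow imaging library, with hypothetical file names, not any studio’s actual pipeline) that combines a stereo pair into an old-fashioned red/cyan anaglyph, the kind you view with cardboard glasses:

```python
from PIL import Image  # Pillow imaging library

def make_anaglyph(left_path, right_path, out_path):
    """Combine two photos shot from slightly different perspectives
    into a red/cyan anaglyph. The left-eye image supplies the red
    channel; the right-eye image supplies green and blue, so colored
    glasses deliver a different picture to each eye -- which is all
    any 3D system fundamentally needs to do."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")

    r, _, _ = left.split()   # red channel from the left-eye shot
    _, g, b = right.split()  # green and blue from the right-eye shot

    Image.merge("RGB", (r, g, b)).save(out_path)

# Hypothetical inputs: any pair of shots taken a few inches apart.
make_anaglyph("left.jpg", "right.jpg", "anaglyph.jpg")
```

Anaglyphs trade away accurate color, which is why theaters moved to polarized – and now digital – systems, but the underlying trick is the same.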

So why are we seeing these movies again now? One possibility is that the explanation for the renaissance of 3D is simply that digital technology solves some of these problems (especially (b) and (c)), and so filmmakers are interested in trying again.

However, I think it’s possible there’s something else going on. Could it have something to do with the fact that a 3D movie, as experienced in the theater, cannot be pirated?

According to IMDB, the LA premiere of Beowulf was on November 5, 2007, and the film was officially released in the US on November 16. On the other hand, according to vcdquality (a news site that announces the “releases” of films into various darknets), it was already available for file sharing by November 15.

Isn’t it just possible that the studios were thinking: Hey guys, I know you could just download this fantasy flick and see it on your widescreen monitor. But unless you give us $11 and sit in a dark theater with the polarized glasses, you won’t be seeing the half-naked Angelina Jolie literally popping off the screen!

Maybe the studios have learned something after all.

The "…and Technology" Debate

When an invitation to the Facebook group came along, I was happy to sign up as an advocate of ScienceDebate 2008, a grassroots effort to get the Presidential candidates together for a group grilling on, as the web site puts it, “what may be the most important social issue of our time: Science and Technology.”

Which issues, exactly, would the debate cover? The web site lists seventeen, ranging from pharmaceutical patents to renewable energy to stem cells to space exploration. Each of the issues mentioned is both important and interesting, but the list is missing something big: It doesn’t so much as touch on digital information technologies. Nothing about software patents, the future of copyright, net neutrality, voting technology, cybersecurity, broadband penetration, or other infotech policy questions. The web site’s list of prominent supporters for the proposal – rich with Nobel laureates and university presidents, our own President Tilghman among them – shares this strange gap. It includes only one computer-focused expert, Peter Norvig of Google.

Reading the site reminded me of John McCain’s recent remark (captured in a Washington Post piece by Garrett Graff) that the minor issues he might delegate to a vice-president include “information technology, which is the future of this nation’s economy.” If information technology really is so important, then why doesn’t it register as a larger blip on the national political radar?

One theory would be that, despite their protestations to the contrary, political leaders do not understand how important digital technology is. If they did understand, the argument might run, then they’d feel more motivated to take positions. But I think the answer lies elsewhere.

Politicians, in their perennial struggle to attract voters, have to take into account not only how important an issue actually is, but also how likely it is to motivate voting decisions. That’s why issues that make a concrete difference to a relatively small fraction of the population, such as flag burning, can still emerge as important election themes if the level of voter emotion they stir up is high enough. Tech policy may, in some ways, be a kind of opposite of flag burning: An issue that is of very high actual importance, but relatively low voting-decision salience.

One reason tech policy might tend to punch below its weight, politically, is that many of the most important tech policy questions turn on factual, rather than normative, grounds. There is surprisingly wide and surprisingly persistent reluctance to acknowledge, for example, how insecure voting machines actually are, but few would argue with the claim that extremely insecure voting machines ought not to be used in elections.

On net neutrality, to take another case, those who favor intervention tend to think that a bad outcome (with network balkanization and a drag on innovators) will occur under a laissez-faire regime. Those who oppose intervention see a different but similarly negative set of consequences occurring if regulators do intervene. The debate at its most basic level isn’t about the goodness or badness of various possible outcomes, but is instead about the relative probabilities that those outcomes will happen. And assessing those probabilities is, at least arguably, a task best entrusted to experts rather than to the citizenry at large.

The reason infotech policy questions tend to recede in political contexts like the science debate, in other words, is not that their answers matter less. It’s that their answers depend, to an unusual degree, on technical fact rather than on value judgment.

Computing in the Cloud, January 14-15 in Princeton

The agenda for our workshop on the social and policy implications of “Computing in the Cloud” is now available, along with information about how to register (for free). We have a great lineup of speakers, with panels on “Possession and ownership of data”, “Security and risk in the cloud”, “Civics in the cloud”, and “What’s next”. The workshop is organized by the Center for InfoTech Policy at Princeton, and sponsored by Microsoft.

Don’t miss it!

Joining Princeton's InfoTech Policy Center

The Center for InfoTech Policy at Princeton will have space next year to host visiting scholars. If you’re interested, see the announcement.

Lessons from Facebook's Beacon Misstep

Facebook recently beat a humiliating retreat from Beacon, its new system for peer-based advertising, in the face of users’ outrage about the system’s privacy implications. (When you bought or browsed products on certain third-party sites, Beacon would show your Facebook friends what you had done.)
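To see how small the technical difference between a feature and an outrage can be, here is a toy sketch (in Python, with hypothetical names throughout – this is not Facebook’s actual architecture) of a Beacon-style pipeline that publishes nothing unless the user has affirmatively opted in:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: list = field(default_factory=list)
    # The crucial design choice is the default. An opt-in system
    # defaults to False; Beacon behaved as though this were True.
    share_purchases: bool = False

def report_purchase(user, item, feeds):
    """Announce a third-party purchase to each friend's news feed,
    but only if the user has affirmatively opted in."""
    if not user.share_purchases:
        return  # no consent recorded: the event goes nowhere
    for friend in user.friends:
        feeds.setdefault(friend, []).append(f"{user.name} bought {item}")

# Hypothetical usage: alice never opted in, so nothing is shared.
feeds = {}
alice = User("alice", friends=["bob", "carol"])
report_purchase(alice, "an engagement ring", feeds)
assert feeds == {}  # her friends' feeds stay empty
```

Beacon as deployed behaved as though that default were flipped the other way, which is precisely what users objected to.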

Beacon was a clever use of technology and might have brought Facebook significant ad revenue, but it seemed a pretty obvious nonstarter from users’ point of view. Trying to deploy it, especially without a strong opt-out capability, was a mistake. On the theory that mistakes are often instructive, let’s take a few minutes to work through possible lessons from the Beacon incident.

To start, note that this wasn’t a privacy accident, where user data is leaked because of a bug, procedural breakdown, or treacherous employee. Facebook knew exactly what it was doing, and thought it was making a good business decision. Facebook obviously didn’t foresee its users’ response to Beacon. Though the money – not to mention the chance to demonstrate business model innovation – must have been a powerful enticement, the decision to proceed with Beacon could only have made sense if the company thought a strong user backlash was unlikely.

Organizations often have trouble predicting what will cause privacy outrage. The classic example is the U.S. government’s now-infamous Total Information Awareness program. TIA’s advocates in the government were honestly surprised when the program’s revelation caused a public furor. This wasn’t just public posturing. I still remember a private conversation I had with a TIA official who ridiculed my suggestion that the program might turn out to be controversial. This blindness contributed to the program’s counterproductive branding, such as the creepy all-seeing-eye logo. Facebook’s error was similar, though of much smaller magnitude.

Of course, privacy is not the only area where organizations misjudge their clients’ preferences. But there does seem to be something about privacy that makes these sorts of errors more common.

What makes privacy different? I’m not entirely certain, but since I owe you at least a strawman answer, let me suggest some possibilities.

(1) Overlawyerization: Organizations see privacy as a legal compliance problem. They’re happy as long as what they’re doing doesn’t break the law; so they do something that is lawful but foolish.

(2) Institutional structure: Privacy is spun off to a special office or officer so the rest of the organization doesn’t have to worry about it; and the privacy office doesn’t have the power to head off mistakes.

(3) Treating privacy as only a PR problem: Rather than asking whether its practices are really acceptable to clients, the organization does what it wants and then tries to sell its actions to clients. The strategy works, until angry clients seize control of the conversation.

(4) Undervaluing emotional factors: The organization sees a potential privacy backlash as “only” an emotional response, which must take a backseat to more important business factors. But clients might be angry for a reason; and in any case they will act on their anger.

(5) Irrational desire for control: Decisionmakers like to feel that they’re in control of client interactions. Sometimes they insist on control even when it would be rational to follow the client’s lead. Where privacy is concerned, they want to decide what clients should want, rather than listening to what clients actually do want.

Perhaps the underlying cause is the complex and subtle nature of privacy. We agree that privacy matters, but we don’t all agree on its contours. It’s hard to offer precise rules for recognizing a privacy problem, but we know one when we see it. Or at least we know it after we’ve seen it.