
Archives for December 2007


Three Down, One to Go: Warner Music to Sell MP3s

Warner Music will sell music through Amazon’s online store without DRM (copy protection) technology, according to a New York Times story by Jeff Leeds. This is a big step for Warner, given that earlier this year Warner CEO Edgar Bronfman said that selling MP3s would be “completely without logic or merit.”

The next question is whether Warner will make a deal with Apple to sell MP3s on iTunes too. The NYT article says Warner plans to do so, but the LA Times implies the opposite. The two other majors that sell MP3s are split on this point, with EMI selling MP3s through multiple stores including iTunes, and Universal Music selling MP3s through other online stores but refusing to do so through iTunes. Is Warner willing to inconvenience its customers in order to undercut Apple?

By the way, the Times article makes a simple but common mistake, in saying that “the industry faces increasing pressure to bolster digital music sales as its traditional business — selling CDs — suffers a sharp decline.” CDs are digital too, and they lack DRM (attempts to add DRM to CDs failed disastrously), but news stories and commentary often ignore these facts. I guess “Warner to adopt another DRM-free digital format” wouldn’t seem quite so newsworthy.

Three of the four majors (all but SonyBMG) now sell MP3s. It’s only a matter of time before the last domino falls, and the industry can move on to the next stage in its evolution.


Obama's Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier – none of the other major players has gone into anywhere near his level of detail in their official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him – but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on it over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs – information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service and so on). Along the way, other areas get passing mention – immigration of high tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked about tech policy issue in Washington – it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of his view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.


The Return of 3-D Movies

[Today’s guest post is by longtime reader and commenter Mitch Golden. Thanks, Mitch! If you’re a Freedom to Tinker reader and have a great idea for a guest post, please let me know. – Ed]

Last Friday I was at a movie preview for a concert movie called U23D, which, as you will correctly surmise, was a U2 concert filmed in digital 3D.

A few weeks ago I saw the new film Beowulf, also in 3D.

As I look out the office window to the AMC Loews on 84th St, I see that the marquee is already pitching Hannah Montana 3d, not due out until February.

And outside that same theater is a 3d movie poster for the upcoming Speed Racer movie.

Suddenly everything is floating in space, after decades of flatness. What gives?

Those of us who frequent Freedom To Tinker know that there are two approaches for producers operating in our world of nearly-zero-cost copying. The option most often pursued thus far by the content industries has been to pin hope on a technological fix – DRM – and then use political muscle to get governments around the world to mandate its use. Thus far this strategy can only be said to have been pretty much a total train wreck for all the parties involved – from the record industry to Microsoft – and it has had the disastrous side effect (from their point of view) of persuading an entire generation – and then some – that the media companies are “the man” and so file sharing is not immoral.

Of course the other option – thus far being resisted strenuously by the record labels – is to try a new business model. Sell the customers something better than what they can get for free. Maybe – just maybe – that’s what’s going on here.

As you doubtless know, there’s nothing new about 3d movie or photos. In fact, they go back nearly to the very beginning of photography. To make the 3d effect work, you just need to present different images, shot from slightly different perspectives, to the two eyes. While various systems have been invented over the years to do this (see the wikipedia page on the subject for a bit of the history of the technology), they all to a greater or lesser extent shared the common faults that (a) the theater had to install special equipment (including a more expensive screen that reflects polarized light without depolarizing it), (b) the film was bigger and more difficult to handle, and (c) splicing the film print when it broke required careful treatment to avoid getting the two eyes out of sync. So it just wasn’t quite worth it.
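The two-view principle Mitch describes can be sketched in a few lines of NumPy. The red-cyan anaglyph below is just one of the many delivery systems he alludes to (modern theaters use polarization instead), and the tiny arrays are synthetic stand-ins for real left- and right-camera frames:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two grayscale views into a red-cyan anaglyph.

    left/right: 2-D uint8 arrays of the same shape, shot from slightly
    offset camera positions. Red/cyan glasses route the red channel to
    one eye and the green+blue channels to the other, so each eye sees
    a different perspective and the brain fuses them into depth.
    """
    h, w = left.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    out[..., 0] = left    # red channel   -> left eye
    out[..., 1] = right   # green channel -> right eye
    out[..., 2] = right   # blue channel  -> right eye
    return out

# Tiny synthetic example: a bright square shifted one pixel between
# the two views, mimicking the parallax between our two eyes.
left = np.zeros((8, 8), dtype=np.uint8)
right = np.zeros((8, 8), dtype=np.uint8)
left[2:5, 2:5] = 255
right[2:5, 3:6] = 255   # same square, offset horizontally
img = make_anaglyph(left, right)
```

The horizontal offset between where the square lands in each channel is exactly the disparity that the viewer’s brain interprets as depth.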

So why are we seeing these movies again now? One possibility is that the explanation for the renaissance of 3d is just that digital technology solves some of these problems (especially b and c), and so filmmakers are interested in trying again.

However, I think it’s possible there’s something else going on. Could it have something to do with the fact that a 3d movie cannot be pirated?

According to IMDB, the LA premiere of Beowulf was on November 5, 2007 and the film was officially released in the US on November 16. On the other hand, according to vcdquality (a news site that announces the “releases” of films into various darknets) it was already available for file sharing by November 15.

Isn’t it just possible that the studios were thinking: Hey guys, I know you could just download this fantasy flick and see it on your widescreen monitor. But unless you give us $11 and sit in a dark theater with the polarized glasses, you won’t be seeing the half-naked Angelina Jolie literally popping off the screen!

Maybe the studios have learned something after all.


The “…and Technology” Debate

When an invitation to the facebook group came along, I was happy to sign up as an advocate of ScienceDebate 2008, a grassroots effort to get the Presidential candidates together for a group grilling on, as the web site puts it, “what may be the most important social issue of our time: Science and Technology.”

Which issues, exactly, would the debate cover? The web site lists seventeen, ranging from pharmaceutical patents to renewable energy to stem cells to space exploration. Each of the issues mentioned is both important and interesting, but the list is missing something big: It doesn’t so much as touch on digital information technologies. Nothing about software patents, the future of copyright, net neutrality, voting technology, cybersecurity, broadband penetration, or other infotech policy questions. The web site’s list of prominent supporters for the proposal – rich with Nobel laureates and university presidents, our own President Tilghman among them – shares this strange gap. It only includes one computer-focused expert, Peter Norvig of Google.

Reading the site reminded me of John McCain’s recent remark, (captured in a Washington Post piece by Garrett Graff) that the minor issues he might delegate to a vice-president include “information technology, which is the future of this nation’s economy.” If information technology really is so important, then why doesn’t it register as a larger blip on the national political radar?

One theory would be that, despite their protestations to the contrary, political leaders do not understand how important digital technology is. If they did understand, the argument might run, then they’d feel more motivated to take positions. But I think the answer lies elsewhere.

Politicians, in their perennial struggle to attract voters, have to take into account not only how important an issue actually is, but also how likely it is to motivate voting decisions. That’s why issues that make a concrete difference to a relatively small fraction of the population, such as flag burning, can still emerge as important election themes if the level of voter emotion they stir up is high enough. Tech policy may, in some ways, be a kind of opposite of flag burning: An issue that is of very high actual importance, but relatively low voting-decision salience.

One reason tech policy might tend to punch below its weight, politically, is that many of the most important tech policy questions turn on factual, rather than normative, grounds. There is surprisingly wide and surprisingly persistent reluctance to acknowledge, for example, how insecure voting machines actually are, but few would argue with the claim that extremely insecure voting machines ought not to be used in elections.

On net neutrality, to take another case, those who favor intervention tend to think that a bad outcome (with network balkanization and a drag on innovators) will occur under a laissez-faire regime. Those who oppose intervention see a different but similarly negative set of consequences occurring if regulators do intervene. The debate at its most basic level isn’t about the goodness or badness of various possible outcomes, but is instead about the relative probabilities that those outcomes will happen. And assessing those probabilities is, at least arguably, a task best entrusted to experts rather than to the citizenry at large.

The reason infotech policy questions tend to recede in political contexts like the science debate, in other words, is not that their answers matter less. It’s that their answers depend, to an unusual degree, on technical fact rather than on value judgment.


Computing in the Cloud, January 14-15 in Princeton

The agenda for our workshop on the social and policy implications of “Computing in the Cloud” is now available, along with information about how to register (for free). We have a great lineup of speakers, with panels on “Possession and ownership of data”, “Security and risk in the cloud”, “Civics in the cloud”, and “What’s next”. The workshop is organized by the Center for InfoTech Policy at Princeton, and sponsored by Microsoft.

Don’t miss it!


Ohio Study: Scariest E-Voting Security Report Yet

The State of Ohio released the report of a team of computer scientists it commissioned to study the state’s e-voting systems. Though it’s a stiff competition, this may qualify as the scariest e-voting study report yet.

This was the most detailed study yet of the ES&S iVotronic system, and it confirms the results of the earlier Florida State study. The study found many ways to subvert ES&S systems.

The ES&S system, like its competitors, is subject to viral attacks that can spread from one voting machine to others, and to the central vote tabulation systems.

Anyone with access to a machine can re-calibrate the touchscreen to affect how the machine records votes (page 50):

A terminal can be maliciously re-calibrated (by a voter or poll worker) to prevent voting for certain candidates or to cause voter input for one candidate to be recorded for another.

Worse yet, the system’s access control can be defeated by a poll worker or an ordinary voter, using only a small magnet and a PDA or cell phone (page 50).

Some administrative functions require entry of a password, but there is an undocumented backdoor function that lets a poll worker or voter with a magnet and PDA bypass the password requirements (page 51).

The list of problems goes on and on. It’s inconceivable that the iVotronic could have undergone any kind of serious security review before being put on the market. It’s also unclear how the machine managed to get certified.

Even if you don’t think anyone would try to steal an election, this should still scare you. A machine with so many design errors must also be susceptible to misrecording or miscounting votes due to the ordinary glitches and errors that always plague computer systems. Even if all poll workers and voters were angels, this machine would be too risky to use.

This is yet more evidence that today’s paperless e-voting machines can’t be trusted.

[Correction (December 18): I originally wrote that this was the first independent study of the iVotronic. In fact, the Florida State team studied the iVotronic first and reported many problems. The new report confirms the Florida State report, and provides some new details. My apologies to the Florida State team for omitting their work.]


Joining Princeton's InfoTech Policy Center

The Center for InfoTech Policy at Princeton will have space next year to host visiting scholars. If you’re interested, see the announcement.


Lessons from Facebook's Beacon Misstep

Facebook recently beat a humiliating retreat from Beacon, its new system for peer-based advertising, in the face of users’ outrage about the system’s privacy implications. (When you bought or browsed products on certain third-party sites, Beacon would show your Facebook friends what you had done.)

Beacon was a clever use of technology and might have brought Facebook significant ad revenue, but it seemed a pretty obvious nonstarter from users’ point of view. Trying to deploy it, especially without a strong opt-out capability, was a mistake. On the theory that mistakes are often instructive, let’s take a few minutes to work through possible lessons from the Beacon incident.

To start, note that this wasn’t a privacy accident, where user data is leaked because of a bug, procedural breakdown, or treacherous employee. Facebook knew exactly what it was doing, and thought it was making a good business decision. Facebook obviously didn’t foresee their users’ response to Beacon. Though the money – not to mention the chance to demonstrate business model innovation – must have been a powerful enticement, the decision to proceed with Beacon could only have made sense if the company thought a strong user backlash was unlikely.

Organizations often have trouble predicting what will cause privacy outrage. The classic example is the U.S. government’s now-infamous Total Information Awareness program. TIA’s advocates in the government were honestly surprised when the program’s revelation caused a public furor. This wasn’t just public posturing. I still remember a private conversation I had with a TIA official who ridiculed my suggestion that the program might turn out to be controversial. This blindness contributed to the program’s counterproductive branding such as the creepy all-seeing-eye logo. Facebook’s error was similar, though of much smaller magnitude.

Of course, privacy is not the only area where organizations misjudge their clients’ preferences. But there does seem to be something about privacy that makes these sorts of errors more common.

What makes privacy different? I’m not entirely certain, but since I owe you at least a strawman answer, let me suggest some possibilities.

(1) Overlawyerization: Organizations see privacy as a legal compliance problem. They’re happy as long as what they’re doing doesn’t break the law; so they do something that is lawful but foolish.

(2) Institutional structure: Privacy is spun off to a special office or officer so the rest of the organization doesn’t have to worry about it; and the privacy office doesn’t have the power to head off mistakes.

(3) Treating privacy as only a PR problem: Rather than asking whether its practices are really acceptable to clients, the organization does what it wants and then tries to sell its actions to clients. The strategy works, until angry clients seize control of the conversation.

(4) Undervaluing emotional factors: The organization sees a potential privacy backlash as “only” an emotional response, which must take a backseat to more important business factors. But clients might be angry for a reason; and in any case they will act on their anger.

(5) Irrational desire for control: Decisionmakers like to feel that they’re in control of client interactions. Sometimes they insist on control even when it would be rational to follow the client’s lead. Where privacy is concerned, they want to decide what clients should want, rather than listening to what clients actually do want.

Perhaps the underlying cause is the complex and subtle nature of privacy. We agree that privacy matters, but we don’t all agree on its contours. It’s hard to offer precise rules for recognizing a privacy problem, but we know one when we see it. Or at least we know it after we’ve seen it.


Universal Didn't Ignore Digital, Just Did It Wrong

Techies have been chortling all week about comments made by Universal Music CEO Doug Morris to Wired’s Seth Mnookin. Morris, despite being in what is now a technology-based industry, professed extreme ignorance about the digital world. Here’s the money quote:

Morris insists there wasn’t a thing he or anyone else could have done differently. “There’s no one in the record company that’s a technologist,” Morris explains. “That’s a misconception writers make all the time, that the record industry missed this. They didn’t. They just didn’t know what to do. It’s like if you were suddenly asked to operate on your dog to remove his kidney. What would you do?”

Personally, I would hire a vet. But to Morris, even that wasn’t an option. “We didn’t know who to hire,” he says, becoming more agitated. “I wouldn’t be able to recognize a good technology person — anyone with a good bullshit story would have gotten past me.” Morris’ almost willful cluelessness is telling. “He wasn’t prepared for a business that was going to be so totally disrupted by technology,” says a longtime industry insider who has worked with Morris. “He just doesn’t have that kind of mind.”

Morris’s explanation isn’t just pathetic, it’s also wrong. The problem wasn’t that the company had no digital strategy. They had a strategy, and they had technologists on the payroll who were supposed to implement it. But their strategy was a bad one, combining impractical copy-protection schemes with locked-down subscription services that would appeal to few if any customers.

The most interesting side of the story is that Universal’s strategy is improving now – they’re selling unencumbered MP3s, for example – even though the same proud technophobe is still in charge.

Why the change?

The best explanation, I think, is a fear that Apple would use its iPod/iTunes technologies to grab control of digital music distribution. If Universal couldn’t quite understand the digital transition, it could at least recognize a threat to its distribution channel. So it responded by competing – that is, trying to give customers what they wanted.

Still, if I were a Universal shareholder I wouldn’t let Morris off the hook. What kind of manager, in an industry facing historic disruption, is uninterested in learning about the source of that disruption? A CEO can’t be an expert on everything. But can’t the guy learn just a little bit about technology?


Latest voting system analysis from California

This summer, the California Secretary of State commissioned a first-ever “Top to Bottom Review” of all the electronic voting systems used in the state. In August, the results of the first round of review were published, finding significant security vulnerabilities and a variety of other problems with the three vendors reviewed at the time. (See the Freedom to Tinker coverage for additional details.) The ES&S InkaVote Plus system, used in Los Angeles County, wasn’t included in this particular review. (The InkaVote is apparently unrelated to the ES&S iVotronic systems used elsewhere in the U.S.) The reports on InkaVote are now public.

(Disclosure: I was a co-author of the Hart InterCivic source code report, released by the California Secretary of State in August. I was uninvolved in the current round of investigation and have no inside information about this work.)

First, it’s worth a moment to describe what InkaVote is actually all about.  It’s essentially a precinct-based optical-scan paper ballot system, with a template-like device, comparable to the Votomatic punch-card systems.  As such, even if the tabulation computers are completely compromised, the paper ballots remain behind with the potential for being retabulated, whether mechanically or by hand.

The InkaVote reports represent work done by a commercial firm, atsec, whose primary business is performing security evaluation against a variety of standards, such as FIPS-140 or the ISO Common Criteria. The InkaVote reports are quite short (or, at least, the public reports are short). In effect, we only get to see the high-level bullet points rather than detailed explanations of what was found. Furthermore, the analysis was apparently compressed into an impossible two-week period, meaning there are likely to be additional issues that exist but were not discovered for lack of time. Despite this, we still get a strong sense of how vulnerable these systems are.

From the source code report:

The documentation provided by the vendor does not contain any test procedure description; rather, it provides only a very abstract description of areas to be tested. The document mentions test cases and test tools, but these have not been submitted as part of the TDP and could not be considered for this review. The provided documentation does not show evidence of “conducting of tests at every level of the software structure”. The TDP and source code did not contain unit tests, or any evidence that the modules were developed in such a way that program components were tested in isolation. The vendor documentation contains a description of cryptographic algorithms that is inconsistent with standard practices and represented a serious vulnerability. No vulnerability assessment was made as part of the documentation review because the attack approach could not be identified based on the documentation alone. (The source review identified additional specific vulnerabilities related to encryption).

This is consistent, for better or for worse, with what we’ve seen from the other vendors.  Given that, security vulnerabilities are practically a given. So, what kinds of vulnerabilities were found?

In the area of cryptography and key management, multiple potential and actual vulnerabilities were identified, including inappropriate use of symmetric cryptography for authenticity checking (A.8), use of a very weak homebrewed cipher for the master key algorithm (A.7), and key generation with artificially low entropy which facilitates brute force attacks (A.6). In addition, the code and comments indicated that a hash (checksum) method that is suitable only for detecting accidental corruption is used inappropriately with the claimed intent of detecting malicious tampering. The Red Team has demonstrated that due to the flawed encryption mechanisms a fake election definition CD can be produced that appears genuine, see Red Team report, section A.15.
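The report’s point about checksums deserves a moment: a checksum detects accidental corruption, but anyone can recompute one, so it proves nothing against a deliberate attacker. A keyed MAC is the standard fix. A minimal sketch (the key and data here are hypothetical, purely for illustration; the report doesn’t disclose the actual mechanisms):

```python
import hmac
import hashlib
import zlib

ballot = b"candidate=A;votes=1000"
tampered = b"candidate=B;votes=1000"

# A CRC catches accidental corruption, but an attacker who alters the
# data simply attaches a freshly computed CRC -- verification passes.
original_crc = zlib.crc32(ballot)
forged_crc = zlib.crc32(tampered)   # trivially recomputed by anyone

# An HMAC binds the check value to a secret key: without the key, the
# attacker cannot produce a tag that verifies for the altered data.
key = b"election-master-key"        # hypothetical key, for illustration
tag = hmac.new(key, ballot, hashlib.sha256).digest()
forged_tag = hmac.new(b"a guess", tampered, hashlib.sha256).digest()

def verify(key, data, tag):
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Of course, an HMAC only helps if the key itself is generated with real entropy and kept secret, which is exactly what findings A.6 and A.7 say didn’t happen here.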

106 instances were identified of SQL statements embedded in the code with no evidence of sanitation of the data before it is added to the SQL statement. It is considered a bad practice to build the SQL statements at runtime; the preferred method is to use predefined SQL statements using bound variables. A specific potential vulnerability was found and documented in A.10, SQL Injection.

Ahh, lovely (or, I should say, oy gevaldik). Curiously, the InkaVote tabulation application appears to have been written in Java – a good thing, because it eliminates the possibility of buffer overflows. Nonetheless, writing this software in a “safe” language is insufficient to yield a secure system.
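To see why the 106 embedded SQL statements matter, here is a sketch of the two patterns the report contrasts, using Python’s sqlite3 and an invented election table (the table and the hostile input are hypothetical, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (precinct TEXT, votes INTEGER)")
conn.execute("INSERT INTO results VALUES ('Ward 1', 120)")
conn.commit()

precinct = "Ward 1' OR '1'='1"   # hostile input posing as a precinct name

# The risky pattern the report describes: SQL assembled at runtime by
# string substitution, so the input can rewrite the query itself.
unsafe = "SELECT votes FROM results WHERE precinct = '%s'" % precinct
# Executing `unsafe` matches every row, because the injected OR clause
# becomes part of the SQL text.

# The preferred pattern: a predefined statement with a bound variable.
# The driver passes the input as data, never as SQL, so the hostile
# string just fails to match any precinct.
rows = conn.execute(
    "SELECT votes FROM results WHERE precinct = ?", (precinct,)
).fetchall()
```

Bound variables cost nothing to use, which is why reviewers flag runtime string-built SQL as bad practice even before finding a concrete exploit.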

The reviewer noted the following items as impediments to an effective security analysis of the system:

  • Lack of design documentation at appropriate levels of detail.
  • Design does not use privilege separation, so all code in the entire application is potentially security critical.
  • Unhelpful or misleading comments in the code.
  • Potentially complex data flow due to exception handling.
  • Subjectively, large amount of source code compared to the functionality implemented.

The code constructs used were generally straightforward and easy to follow on a local level. However, the lack of design documentation made it difficult to globally analyze the system.

It’s clear that none of the voting system vendors that have been reviewed so far have had the engineering mandate (or the engineering talent) to build secure software systems that are suitably designed to resist threats that are reasonable to expect in an election setting. Instead, these vendors have produced systems that are “good enough” to sell, relying on external tamper-resistance mechanisms and human procedures. The Red Team report offers some insight into the value of these kinds of mitigations:

In the physical security testing, the wire and tamper proof paper seals were easily removed without damage to the seals using simple household chemicals and tools and could be replaced without detection (Ref item A.1 in the Summary Table). The tamper proof paper seals were designed to show evidence of removal and did so if simply peeled off but simple household solvents could be used to remove the seal unharmed to be replaced later with no evidence that it had been removed. Once the seals are bypassed, simple tools or easy modifications to simple tools could be used to access the computer and its components (Ref A.2 in summary). The key lock for the Transfer Device was unlocked using a common office item without the special ‘key’ and the seal removed. The USB port may then be used to attach a USB memory device which can be used in as part of other attacks to gain control of the system. The keyboard connector for the Audio Ballot unit was used to attach a standard keyboard which was then used to get access to the operating system (Ref A.10 in Summary) without reopening the computer.

The seal used to secure the PBC head to the ballot box provided some protection but the InkaVote Plus Manual (UDEL) provides instructions for installing the seal that, if followed, will allow the seal to be opened without breaking it (Ref A.3 in the Summary Table). However, even if the seals are attached correctly, there was enough play and movement in the housing that it was possible to lift the PBC head unit out of the way and insert or remove ballots (removal was more difficult but possible). [Note that best practices in the polling place which were not considered in the security test include steps that significantly reduce the risk of this attack succeeding but this weakness still needs to be rectified.]

I’ll leave it as an exercise to the reader to determine what the “household solvents” or “common office item” must be.