November 25, 2024

Applications and Appliances: A Conversation with Jonathan Zittrain

Professor Jonathan Zittrain is well-known for his concern that the general-purpose computer may be disappearing. The recent rise of app stores is putting his fears in a new light. After trading some thoughts about the issues in the blogosphere, he and I sat down at our respective keyboards for a conversation about the future of computing. This is a lightly edited version of our exchange.

JG: I suppose the place to start is with your concern about “appliances”: single-purpose devices like the TiVo. What’s wrong with boxes that do one thing and do it well?

JZ: Nothing’s inherently wrong with single-purpose devices. The worry comes when we lose the general-purpose device formerly known as the PC and replace it with single-purpose devices and “curated” general-purpose devices.

JG: In the last few years, the appliance has taken on a new face, thanks to downloadable apps. An appliance with an app store is no longer just a single-purpose device: it can do all kinds of things. But that, you’ve argued, doesn’t really fix the fundamental problem.

JZ: It may look like the best of both worlds, but I worry it’s the worst of both worlds.

JG: I wanted to focus on your critique of the Mac App Store. This one is interesting because it sells programs that run, not on a closed device like the iPhone but on a traditional, general-purpose computer. The day that Apple activated the Mac App Store, it didn’t reduce the Mac’s generativity one iota. Every Mac in the world was just as capable of doing everything it used to be able to do, just as easily. All Apple added was a new way to install programs: so they made the Mac even easier to use, without reducing its power. But you’re skeptical. Why?

JZ: Let’s see how much of an advantage a developer sees from having an app in the App Store vs. “sideloaded,” even on a Mac OS that doesn’t require jailbreaking for sideloading. To the extent that users are looking to the App Store for their wares, it’s a de facto limit on generativity even if not a literal one. But I agree that the real worry is if Mac OS should become routinely configured not to allow sideloading at all.

JG: So let’s take up some of the countervailing arguments. One that’s high on a lot of people’s lists is the idea that an app store is more secure because it’s more tightly controlled. And of course, security is the major reason you cite in your book The Future of the Internet for why computer makers and users may be tempted to turn their back on open, generative systems. What do you think the Mac App Store does for security, if anything?

JZ: As a security measure, I give the Mac App Store three out of five stars. That’s because the software it is likely to turn away is more gray market sludge, sloppily written or poorly documented, than outright badware. There’s nothing to stop a software developer from registering under a cat’s paw, especially to offer a free app, and then build a bomb into the app. It could appear exemplary while Apple tests it and users then use it, until a designated H-hour at which point all bets are off.

JG: I might not be so quick to dismiss the security benefits. We know that Apple does run static analysis tools against iOS App Store submissions. And then there’s sandboxing. Regular programs have substantially free run of the computer, but Mac App Store programs are severely restricted in what they can see and do. It’s as if they’re playing safely with soft rubber toys in a glass-encased sandbox: your solitaire game isn’t going to suddenly overwrite your spreadsheets. Doesn’t that have some significant security benefits?

JZ: Sandboxing can prevent some damage from an app bound and determined to wreak havoc, but sandboxing is a phenomenon independent of the App Store: Mac OS could implement it with or without Apple screening the software up front.

JG: True. But sandboxing and Apple’s code review go together. The code review ensures that programs are placed in the smallest appropriate sandbox for their needs. Apple will only let the application have permissions if it really needs them to do its job: there’s no reason for a stock ticker to save files to arbitrary places. Without the up-front review, how many developers would voluntarily agree to play only in the sandbox?

JZ: The real question at the intersection of security and freedom is whether the user has an opportunity to choose to override the sandbox’s boundaries. If the user can’t do it, then a bunch of functionality is foreclosed unless Apple chooses to allow it, and Apple can be fooled as easily as anyone else by a truly bad actor. If the user can do it, there’s no particular need for the App Store.

JG: This is a question about routine practice and interface design. If I rarely need to override the sandbox’s limits, then when an app comes to me and asks for additional privileges, my eyebrows are more likely to go up.

JZ: Don’t forget that Apple reserves the right not only to prevent software distributions up front, but also retroactively: software can be removed from machines that have already downloaded it. Perhaps helpful in some limited cases of security troubles, but all the more troublesome as regulators realize that cats can be put back into bags.

JG: Well, if we’re thinking about retroactive nuking, Apple has shown that it can uninstall even user-installed programs. After the Mac Defender malware started tricking Mac users into installing it, Apple came out with an operating system update that uninstalled it. Yes, Apple gave users a dialog box with a choice, but technologically, there’s no reason it had to. Do you see a difference between this and the Mac App Store?

JZ: Only in how this evolves our conception of code and who “owns” it: if the app lives in the cloud, our expectations are that it’s a service, and a service can change from day to day. If it’s on our own machines we feel like we own it, and look skeptically — and vendors tread carefully — over attempts to modify it without clearing it with us first.

JG: How much of this is about the fact that this is Apple’s app store we’re talking about? Do you feel differently about app stores that aren’t offered by the same company that controls the hardware and the operating system? So take something like Valve’s highly successful Steam, which is basically an app store for games. It runs on both Windows and Mac, and it handles all of the payment and DRM for the game developers.

JZ: I worry less if there’s not vertical integration, but there’s still a concern if, through natural monopoly, we end up with a single gatekeeper or a mere handful of them. Hence Facebook’s platform as a worry, despite (or because of!) it being not tied to any one OS or browser.

JG: I’d like to bring in an idea from your book: “Red” and “Green” PCs. Your computer would have two “virtual machines,” which couldn’t easily affect each other. The Green one would be for important data and would only run software you were confident in; the Red one would be easy to reset back to a safe point.

JZ: Well, as I say in the book:

“Someone could confidently store important data on the Green PC and still use the Red PC for experimentation. Knowing which virtual PC to use would be akin to knowing when a sport utility vehicle should be placed into four-wheel drive mode instead of two-wheel drive, a decision that mainstream users could learn to make responsibly and knowledgeably.”

JG: I read that and thought it sounded like a good idea. And it was pretty much the first thing I thought of when Apple announced the Mac App Store. Everything you install manually is like the Red PC; everything you install from the Mac App Store is like the Green PC. You have a safe mode for greater security, and an unsafe mode for greater generativity. Since you’re a fan of the Red/Green hybrid between open and closed, why not the Mac App Store hybrid?

JZ: The Red PC isn’t the same as a sandbox. Software developers in a Red/Green environment still only write one piece of code, and it doesn’t have to be otherwise vetted. The whole point of the red zone is to contain any bad effects of iffy code. The point of a sandbox is to mitigate the risks of iffy code, by limiting its functionality outright. This is a subtle but important point. The Mac App Store with a sandbox requirement means that a competent, legitimate developer who wants to do things beyond the sandbox either has to plead a special case or write two versions of the code: one for the Store and one not for the Store.

JG: Can’t this argument be turned back against the Red/Green model? The competent, legitimate developer who wants to write code that indexes and optimally compresses your Word documents needs to plead a special case to whoever controls the green certification. She doesn’t even have the choice to write both red and green versions of her code.

JZ: My conception of the green model is not that it’s guarded by a third party, but that the user gets to place iffy apps into a place where, if they blow up, stuff in the green zone doesn’t get hurt.

JG: I keep coming back to the fact that participation in the Mac App Store is voluntary. And this isn’t just voluntary in the sense that participation in the iOS App Store is “voluntary” because no one held a gun to your head and forced you to write iPhone games. You have no good alternative to the iOS App Store if you want your app to run on an iPhone, but you can perfectly easily write, sell, and distribute software that users install on Macs in the time-honored fashion: clicking on an installer or dragging an icon into the Applications folder. How can adding the Mac App Store as an additional option be a net loss?

JZ: Well, that’s the question. If sideloading is trivial, I’m in your corner. But one wonders why any developer would take the 30% hit in profits to distribute through the App Store if he or she could put it on a Web site and sell it through sideloading. (And, when did the front become the side?!)

JG: Is this really a case against truly voluntary app stores? Put another way, should we be digging in to prevent Apple from offering the Mac App Store, or should we be digging in to prevent Apple from turning off the ability to install programs manually?

JZ: I see it more as a spectrum than a dichotomy. Compare the Mac App Store with a program that provided an Apple Good Housekeeping seal for good code. They’re functionally the same, but wildly different in practice thanks to the power of the default.

Jonathan Zittrain is a Professor of Law at Harvard Law School and the author of The Future of the Internet: And How to Stop It.

James Grimmelmann is an Associate Professor at New York Law School.

Corruption Bureau assigns fox to guard henhouse

Recently I wrote about my discovery that someone erased evidence on an election computer in Cumberland County, NJ. After something went wrong in a Primary Election in June 2011, the Superior Court (the Hon. David E. Krell) had ordered the County Board of Elections to make the computer available for me (the Plaintiffs’ expert) to examine.

When I examined the computer on August 17, among those watching me were the County Administrator of Elections (Lizbeth Hernandez), the Director of the New Jersey Division of Elections (Robert Giles), and a Deputy Attorney General of the State of New Jersey (George Cohen). This is quite a lot of firepower for reviewing a rather small election (43 votes cast in total).

In my examination of the computer, I noticed that files and logs were erased on the day before. I notified the Court, and within a few days an IT specialist employed by the county wrote, in an affidavit, that he had been asked by the County Administrator of Elections to examine the computer the day before my own examination, and at that time he erased the files and cleared the logs.

We do not know exactly what motivated Ms. Hernandez to ask the IT specialist to fiddle with the computer. The IT specialist himself says “I was asked by Lizbeth Hernandez to determine the date the hardening process was applied to the laptop.” Why is this date important? Back in 2010, a different judge of the Superior Court (the Hon. Linda R. Feinberg) had ordered the State to secure the computers used in conducting elections by applying these “hardening guidelines.” Mr. Giles was the one responsible for making sure the State (and all its Counties) complied with this order, more than a year ago. In August 2011, did Mr. Giles ask Ms. Hernandez whether the “hardening guidelines” had been applied? Perhaps these election officials were concerned that I might discover something about late compliance, or noncompliance, with Judge Feinberg’s order.

That is, the IT specialist’s affidavit points to concern about whether Mr. Giles had effectively brought New Jersey (including Cumberland County) into compliance; by erasing the logs and temporary files, he erased evidence about compliance or noncompliance.

Judge Krell, down in Cumberland County, does not like people tampering with evidence in the cases that come before him. On September 9 he referred the possible evidence-tampering to the prosecutor, that is, to the NJ Attorney General’s office. As I described in “Will the NJ Attorney General Investigate the NJ Attorney General,” the Plaintiffs doubted that the AG would do a real investigation.

Judge Krell’s referral was directed to Christine Hoffman, Chief of the Corruption Bureau of the Office of the Attorney General. On September 20, 2011, Ms. Hoffman wrote in an official letter, “the Division of Criminal Justice will not pursue criminal charges at this time. This matter is being forwarded to your office for your review and whatever action you deem appropriate.”

And to whom is this letter addressed? To Mr. Robert Giles, Director, Division of Elections. This is like asking the fox to investigate whether proper security measures have been installed at the henhouse. Does this instill confidence in the integrity of elections in New Jersey?

Plaintiffs have asked that Judge Krell assign a special master to investigate all irregularities associated with the June 8, 2011 primary election, including the erasure of the information concerning hardening guidelines. The recent turn of events shows why an independent investigation should take place in Cumberland County.

Did NJ election officials fail to respect court order to improve security of elections?

Part 2 of 4
The Gusciora case was filed in 2004 by the Rutgers Constitutional Litigation Clinic on behalf of Reed Gusciora and other public-interest plaintiffs. The Plaintiffs sought to end the use of paperless direct-recording electronic voting machines, which are very vulnerable to fraud and manipulation via replacement of their software. The defendant was the Governor of New Jersey, and as governors came and went it was variously titled Gusciora v. McGreevey, Gusciora v. Corzine, and Gusciora v. Christie.

In 2010 Judge Linda Feinberg issued an Opinion. She did not ban the machines, but ordered the State to implement several kinds of security measures: some to improve the security of the computers on which ballots are programmed (and results are tabulated), and some to improve the security of the computers inside the voting machines themselves.

The Plaintiffs had shown evidence that ballot-programming computers (the so-called “WinEDS laptops”) in Union County had been used to surf the Internet even on election day in 2008. This, combined with many other security vulnerabilities in the configuration of Microsoft Windows, left the computers open to intrusion by outsiders, who could then interfere with and manipulate the programming of ballots before their installation on the voting machines, or manipulate the aggregation of results after the elections. Judge Feinberg also heard testimony that so-called “Hardening Guidelines”, which had previously been prepared by Sequoia Voting Systems at the request of the State of California, would help close some of these vulnerabilities. Basically, one wipes the hard drive clean on the “WinEDS laptop”, installs a fresh copy of Microsoft Windows, runs a script to shut down Internet access and generally tighten the Windows security configuration, and finally installs a fresh copy of the WinEDS ballot software. The Court also heard testimony (from me) that installing these Guidelines requires experience in Windows system administration, and would likely be beyond the capability of some election administrators.

Among the several steps the Court ordered in 2010 was the installation of these Hardening Guidelines on every WinEDS ballot-programming computer used in public elections, within 120 days.

Two years after I testified in the Gusciora case, I served as an expert witness in a different case, Zirkle v. Henry, in a different Court, before Judge David Krell. I wanted to determine whether an anomaly in the June 2011 Cumberland County primary election could have been caused by an intruder from the Internet, or whether such intrusion could reasonably be ruled out. Thus, the question became relevant of whether Cumberland County’s WinEDS laptop was in compliance with Judge Feinberg’s Order. That is, had the Hardening Guidelines been installed before the ballot programming was done for the election in question? If so, what would the event logs say about the use of that machine as the ballot cartridges were programmed?

One of the components of the Hardening Guidelines is to turn on certain Event Logs in the Windows operating system. So, during my examination of the WinEDS laptop on August 17, I opened the Windows Event Viewer and photographed screen-shots of the logs. To my surprise, the logs commenced on the afternoon of August 16, 2011, the day before my examination. Someone had wiped the logs clean, at the very least, or possibly on August 16 someone had wiped the entire hard drive clean in installing the Hardening Guidelines. In either case, evidence in a pending court case–files on a computer that the State of New Jersey and County of Cumberland had been ordered to produce for examination–was erased. I’m told that evidence-tampering is a crime. In an affidavit dated August 24, Jason Cossaboon, a Computer Systems Analyst employed by Cumberland County, stated that he erased the event logs on August 16.

Robert Giles, Director of the New Jersey Division of Elections, was present during my examination on August 17. Mr. Giles submitted to Judge David Krell an affidavit dated August 25 describing the steps he had taken to achieve compliance with Judge Feinberg’s Order. He writes, “The Sequoia hardening manual was sent, by email, to the various county election offices on March 29, 2010. To my knowledge, the hardening process was completed by the affected counties by the required deadline of June 1, 2010.” Mr. Giles does not say anything about how he acquired the “knowledge” that the process was completed.

Mr. Giles was present in Judge Feinberg’s courtroom in 2009 when I testified that the Hardening Guidelines are not simple to install and would typically require someone with technical training or experience. And yet he then pretended to discharge the State’s duty of compliance with Judge Feinberg’s Order by simply sending a mass e-mail to county election officials. Judge Feinberg herself said that sending an e-mail was not enough; a year later, Mr. Giles has done nothing more. In my opinion, this is disrespectful to the Court, and to the voters of New Jersey.

DigiNotar Hack Highlights the Critical Failures of our SSL Web Security Model

This past week, the Dutch company DigiNotar admitted that their servers were hacked in June of 2011. DigiNotar is no ordinary company, and this was no ordinary hack. DigiNotar is one of the “certificate authorities” that has been entrusted by web browsers to certify to users that they are securely connecting to web sites. Without this certainty, users could have their communications intercepted by any nefarious entity that managed to insert itself in the network between the user and the web site they seek to reach.

It appears that DigiNotar did not deserve to be trusted with the responsibility to to issue certifying SSL certificates, because their systems allowed an outside hacker to break in and issue himself certificates for any web site domain he wished. He did so, for dozens of domain names. This included domains like *.google.com and www.cia.gov. Anyone with possession of these certificates and control over the network path between you and the outside world could, for example, view all of your traffic to Gmail. The attacker in this case seems to be the same person who similarly compromised certificate-issuing servers for the company Comodo back in March. He has posted a new manifesto, and he claims to have compromised four other certificate authorities. All signs point to the conclusion that this person is an Iranian national who supports the current regime, or is a member of the regime itself.

The Comodo breach was deeply troubling, and the DigiNotar compromise is far worse. First, this new break-in affected all of DigiNotar’s core certificate servers as opposed to Comodo’s more contained breach. Second, this afforded the attacker the ability to issue not only baseline “domain validated” certificates but also higher-security “extended validation” certificates and even special certificates used by the Dutch government to secure itself (see the Dutch government’s fact sheet on the incident). However, this damage was by no means limited to the Netherlands, because any certificate authority can issue certificates for any domain. The third difference when compared to the Comodo breach is that we have actual evidence of these certificates being deployed against users in the real world. In this case, it appears that they were used widely against Iranian users on many different Iranian internet service providers. Finally, and perhaps most damning for DigiNotar, the break-in was not detected for a whole month, and was then not disclosed to the public for almost two more months (see the timeline at the end of this incident report by Fox-IT). The public’s security was put at risk and browser vendors were prevented from implementing fixes because they were kept in the dark. Indeed, DigiNotar seems to have intended never to disclose the problem, and was only forced to do so after a perceptive Iranian Google user noticed that their connections were being hijacked.

The most frightening thing about this episode is not just that a particular certificate authority allowed a hacker to critically compromise its operations, or that the company did not disclose this to the affected public. More fundamentally, it reminds us that our web security model is prone to failure across the board. As I noted at the time of the Comodo breach:

I recently spoke on the subject at USENIX Security 2011 as part of the panel “SSL/TLS Certificates: Threat or Menace?” (video and audio here if you scroll down to Friday at 11:00 a.m., and slides here.)

"You Might Also Like:" Privacy Risks of Collaborative Filtering

Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.

To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a “Customers Who Bought This Item Also Bought” list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge them to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.

Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you’re a Mozart fan and you listen to a Justin Bieber song, this choice increases the perceived similarity between Justin Bieber and Mozart. Because similarity lists and scores are based on perceived similarity, your action may result in changes to these scores or lists.

Suppose that an attacker knows some of your past purchases on a site: for example, past item reviews, social networking profiles, or real-world interactions are a rich source of information. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. Among other things, these attacks are complicated by the fact that multiple users simultaneously interact with a system and updates are not immediate following a transaction.
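As an added illustration (not code from the paper or from any of the sites it studies), the following Python simulation builds a toy item-to-item similarity system over constructed user data and runs the inference just described: an observer who knows two of a target user’s past items compares the public “similar items” lists for those items across two snapshots and ranks the items that rose in them. The item names, the users, the cosine similarity and the position-gain scoring rule are all assumptions made here, and a second user’s concurrent activity is included as noise.

```python
from collections import defaultdict
from itertools import combinations
import math

# Constructed toy data (assumption; not drawn from Hunch, Amazon, etc.):
# user -> set of items the user has bought or listened to. The target is
# "alice"; her two classical items are assumed to be publicly known.
BEFORE = {
    "alice": {"mozart_requiem", "bach_cello_suites"},
    "u1": {"mozart_requiem", "bach_cello_suites", "chopin_nocturnes"},
    "u2": {"mozart_requiem", "chopin_nocturnes"},
    "u3": {"bach_cello_suites", "chopin_nocturnes"},
    "u4": {"bieber_baby", "katy_perry_firework"},
    "u5": {"bieber_baby", "katy_perry_firework"},
    "u6": {"mozart_requiem", "bieber_baby"},
    "u7": {"chopin_nocturnes", "katy_perry_firework"},
    "u8": {"chopin_nocturnes", "katy_perry_firework"},
}
# Second snapshot: alice listens to a Justin Bieber song, and an unrelated
# user (u3) picks up a pop song at the same time, as noise.
AFTER = {user: set(items) for user, items in BEFORE.items()}
AFTER["alice"].add("bieber_baby")
AFTER["u3"].add("katy_perry_firework")
KNOWN_ALICE_ITEMS = {"mozart_requiem", "bach_cello_suites"}


def item_similarity(transactions):
    """Cosine similarity between items, from co-occurrence across users:
    sim(a, b) = |users(a) & users(b)| / sqrt(|users(a)| * |users(b)|)."""
    users_of = defaultdict(set)
    for user, items in transactions.items():
        for item in items:
            users_of[item].add(user)
    sims = {}
    for a, b in combinations(sorted(users_of), 2):
        shared = len(users_of[a] & users_of[b])
        if shared:
            score = shared / math.sqrt(len(users_of[a]) * len(users_of[b]))
            sims[(a, b)] = sims[(b, a)] = score
    return sims


def similar_list(sims, item, k):
    """The public 'customers who bought X also bought' list: top-k by score."""
    cands = [(score, other) for (x, other), score in sims.items() if x == item]
    cands.sort(key=lambda t: (-t[0], t[1]))  # ties broken by item name
    return [other for _, other in cands[:k]]


def infer_new_items(before, after, known_items, k):
    """Rank items that entered, or rose in, the similar lists of the target's
    known items between two snapshots. Each position gained counts as one
    unit of evidence (scoring rule assumed here for illustration)."""
    evidence = defaultdict(int)
    for item in known_items:
        old = similar_list(before, item, k)
        new = similar_list(after, item, k)
        for pos, other in enumerate(new):
            if other in known_items:
                continue
            old_pos = old.index(other) if other in old else k  # absent = beyond the list
            if pos < old_pos:
                evidence[other] += old_pos - pos
    return sorted(evidence.items(), key=lambda kv: (-kv[1], kv[0]))


def run_demo(k=3):
    """Observe alice's known items across the two snapshots and infer her new one."""
    before = item_similarity(BEFORE)
    after = item_similarity(AFTER)
    return infer_new_items(before, after, KNOWN_ALICE_ITEMS, k)


if __name__ == "__main__":
    for item, score in run_demo():
        print(f"inferred new item for alice: {item} (evidence {score})")
```

The Bieber song moves up in the Mozart list and newly enters the Bach list, while u3’s pop purchase never reaches either list, so only alice’s transaction is inferred.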

To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.

While inference of a Justin Bieber interest may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim reveals certain past transactions, but users may publicly reveal certain transactions while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without user knowledge or consent.

Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.

Worth noting is that this work suggests a risk posed by any feature that adapts in response to potentially sensitive user actions. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of these underlying actions.

Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.