
Archives for February 2006

Obligatory Summers Post

According to Section 4.3(c)(iii)(g) of the Code of Academic Blogging, I am required, on pain of banishment from the faculty club, to post about the departure of Lawrence Summers as Harvard president. Much e-ink has been spilled on this topic, and I for one feel no wiser for it. With some trepidation, let me offer a few thoughts.

I am the first to admit that I don’t know much about how to run Harvard, or about what kind of job Summers did. I read the same press articles as everybody else, but in my experience outsiders commenting on university matters hugely overweight things they read in the newspapers, and underweight less flashy details of management. For example, I as a Princeton professor don’t much care what Princeton’s president thinks about the Iraq war, even though the local newspaper will print any offhand remark she makes on that topic. I care more about who she appointed to the committee to pick the dean of engineering, or about whether she wants to change the administrative status of sixth-year grad students. You’ll never read about those things in the newspaper.

So I’m pretty sure Summers wasn’t bounced because he dissed Cornel West and made some ill-considered remarks in a speech. Not having followed the details of Summers’ management of Harvard, I won’t pretend to know the detailed reasons for his ouster, or whether the Harvard Corporation showed good judgment in (apparently) deciding he should leave.

What is clear is that he is gone because the Corporation (what most other schools would call the Trustees or Regents) decided he should go. A faculty vote of no confidence last year had no real effect, and another one now would also not have mattered if the Corporation thought Summers was on the right track. If the faculty were involved in ousting Summers, their role was to convince the Corporation that Summers was doing a bad job.

Some commentators argue that the opinions of Harvard faculty shouldn’t matter. But even if Harvard faculty members know nothing about how Harvard should be run (which is pretty unlikely, if you ask me), it still matters what they think.

Consider a corporation run by a CEO who reports to a board of directors. If the majority of vice presidents think the CEO is doing a bad job, that should be a matter of concern for the board. They should talk to the CEO and the managers about what is happening. Then they should decide if people are unhappy because the CEO is making difficult but necessary decisions, or whether the CEO is just doing a bad job.

Maybe the CEO is doing an okay job at most things, but he seems to have a knack for angering and disappointing vice presidents. This is a problem for the company if it causes vice presidents to leave or makes it harder to recruit new ones. This problem is especially serious if other companies are eager to hire away vice presidents, and if the competence of the vice presidents is a big factor in the quality of the company’s output.

None of this depends on whether the vice presidents have the formal power to fire the CEO, or to do anything else for that matter. If employees make a difference in the company’s output and the labor market is competitive, the employees have power.

Which is why the call from some commentators to strip the faculty of their power is pointless. At most universities, the faculty have little or no formal power. All the Harvard Arts and Sciences faculty did was (a) pass non-binding resolutions, and (b) talk to people. To the extent they had power over the real decision makers, that power was granted not by Harvard but by the market. That is not something Harvard can change by amending its bylaws.

How Watermarks Fail

I wrote Wednesday about Randy Picker’s suggestion of using digital watermarks to embed users’ personal financial information into media files, to discourage users from sharing the files. Today, I want to talk more generally about watermarks and how they tend to fail.

First, some background. Watermarks are subtle signals embedded in the background of media files. They are supposed to be unobtrusive but easy to detect if you know where to look. Different media have different kinds of watermarks. In a photo, the watermark might be hidden in subtle patterns of shading. In music, it might be in a very soft background buzz, or a barely audible echo.
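
To make this concrete, here is a minimal sketch, in Python, of one naive way a mark could be hidden: overwrite the least-significant bits of audio samples at positions chosen by a secret key. This toy scheme is mine, not any real product’s design, but it is enough to illustrate the attacks discussed below.

    # Toy watermark (illustration only): hide a short bit string in the
    # least-significant bits of audio samples at key-derived positions.
    import random

    def embed(samples, bits, key):
        """Return a copy of samples with bits hidden at key-derived positions."""
        marked = list(samples)
        rng = random.Random(key)                    # the secret key picks the hiding spots
        positions = rng.sample(range(len(samples)), len(bits))
        for pos, bit in zip(positions, bits):
            marked[pos] = (marked[pos] & ~1) | bit  # overwrite the low-order bit
        return marked

    def extract(samples, nbits, key):
        """Recover the hidden bits, given the same key."""
        rng = random.Random(key)
        positions = rng.sample(range(len(samples)), nbits)
        return [samples[pos] & 1 for pos in positions]

Everything that follows hinges on whether an adversary can learn which positions carry the mark.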

In many applications, a watermark must resist attempts by an adversary to remove it. For example, in Randy’s scheme, a user might want to remove the identifying watermark from a media file because he wants to share the file illegally, or because he doesn’t want his personal information exposed to cyber-intruders. It is often important to know how resistant a particular watermark is to removal. There has been plenty of research on this topic, from which we can draw lessons about how watermark removal tends to work.

One theme is the power of Rosetta Stone attacks. The original Rosetta Stone was a stone tablet with the same text written in three ancient languages. This gave scholars who understood one of the languages a big boost in deciphering another one that they didn’t understand. Similarly, watermarks tend to be defeated if an adversary can get his hands on a watermarked file, and the same file without the watermark. By comparing the two, the adversary can determine where the watermark lives, which is usually sufficient to remove the watermark from other files. Alex used this method in deciphering the MediaMax watermark (as described in our Sony CD DRM paper), and my colleagues and I used it also in analyzing the SDMI watermarks back in 2000.
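
In the toy scheme above, a Rosetta Stone attack is literally a diff: any position where the clean and marked copies disagree is a place the mark lives, and wiping those positions in other files removes the mark there too. (Real attacks are messier, since the two copies are perceptually similar rather than sample-for-sample identical, but the principle is the same.)

    # Rosetta Stone attack on the toy scheme: diff a clean copy against a
    # marked copy to learn where the mark lives, then wipe those positions.
    def find_mark_positions(clean, marked):
        return [i for i, (a, b) in enumerate(zip(clean, marked)) if a != b]

    def scrub(samples, positions):
        scrubbed = list(samples)
        for i in positions:
            scrubbed[i] &= ~1          # erase the low-order bit where the mark hid
        return scrubbed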

Almost as powerful as a Rosetta Stone attack is a comparison attack, where the adversary does not have an unwatermarked file, but does have the same file with several different watermarks in it. Any place where two of the files differ is a place where watermark information lives. Given several marked files, an attacker can locate all or most of the places the watermark is hidden, which is again the first step in removing the watermark.
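
The comparison attack looks much the same in the toy setting: collect every position where any two of the differently marked copies disagree. With enough copies this turns up most of the hiding places, even though no clean copy is available.

    # Comparison attack on the toy scheme: any position where two copies
    # of the "same" file disagree must carry watermark information.
    def find_mark_positions_multi(copies):
        reference, positions = copies[0], set()
        for other in copies[1:]:
            positions.update(i for i, (a, b) in enumerate(zip(reference, other)) if a != b)
        return sorted(positions)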

(In theory it might be possible to stop an adversary with access to a limited number of individually watermarked files from completely removing the watermark, if the watermark has lots of places to hide and is constructed cleverly. There is an interesting body of theory about how to do this and when it works. But in practice the assumptions underlying that theory rarely hold.)

Even if the adversary cannot get access to multiple versions of a file (so that Rosetta Stone or comparison attacks are not possible), he can usually still defeat a watermark if he has access to a device that can detect watermarks. By reverse engineering the device, he can figure out where it is looking for the watermark, which again puts him in a position to remove it. (Even if he can’t dissect the device, he can use it as an oracle that tells him whether a particular file has a detectable watermark. Oracles are very helpful in attacking watermarks – Alex used one in his MediaMax watermark analysis, and my colleagues and I used one in our SDMI analysis.)
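
A crude oracle attack can be sketched in the same toy setting: damage one region of the file at a time, ask the detector whether it still sees the mark, and keep the regions whose damage makes the mark disappear. The detector function here is a hypothetical black box standing in for the real device.

    # Oracle attack sketch: detector(samples) -> bool is the black-box device.
    def locate_mark_regions(marked, detector, block=1024):
        regions = []
        for start in range(0, len(marked), block):
            trial = list(marked)
            for i in range(start, min(start + block, len(marked))):
                trial[i] &= ~1                      # crudely damage this region
            if not detector(trial):                 # mark no longer detected
                regions.append((start, start + block))
        return regions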

All of this helps us to understand where watermarks are likely to be effective and where they’re not. The best case for watermarking is where each file is published in a single version, with a watermark in a location that is not disclosed to the public and is not implemented in a device available to the public. This would hold true, for example, in a system that put a distinctive mark into all released versions of a file, and then looked for such watermarks in content broadcast on the radio or TV or downloaded from the net.

Not nearly as strong is a system where there is a single watermark per file, and consumer devices check for the mark – it is subject to reverse engineering and oracle attacks.

Weaker yet is a system where files are watermarked individually for each consumer – it is subject to comparison attacks.

Weakest of all is a system where files are watermarked individually for each consumer and everyone is told how to read the watermarks. Here the adversary can use comparison attacks, and reverse engineering is not even necessary because the inner workings of the watermark detector are well known.

Alert readers will have noticed that all of the uses of watermarks for DRM (copy protection) seem to fall into the weak categories. That is because DRM applications require either that all devices check for the watermark – opening up reverse engineering and oracle attacks – or alternatively that a file be given separate watermarks for separate consumers – opening up comparison attacks. Watermarking has its uses, but it doesn’t seem well suited for DRM.

Mistrust-Based DRM

Randy Picker has an interesting post on the Chicago Law Faculty blog, describing what he calls “mistrust-based DRM”. The idea is that when an online music store gives you a song, it embeds into the song a watermark that contains your credit card number, or some other information that would let a (dishonest) person spend your money. This gives you an incentive not to distribute the song.

This is an instructive idea, but not a practical one.

In analyzing this idea, it’s helpful to divide it into two pieces: (1) embed a watermark that identifies the user, and (2) have that watermark carry a secret of the user’s and be readable by anyone who gets the file. Piece (1), taken alone, is a widely discussed DRM strategy which has not been used much in practice, for reasons I plan to discuss tomorrow. Today, I want to focus on the second piece.

Specifically, I want to compare two systems. In the more traditional system, the watermark is secret – it can be read only by the copyright owner or its agents – and users fear being sued for infringement if their files end up on P2P. In Randy’s system, the watermark is public – anybody can read it – and users fear being victimized by fraud if their files end up on P2P. I’ll call these two alternatives “secret-watermark” and “public-watermark”.

How do they compare? For starters, a secret watermark is much harder for an adversary to find and remove. If a watermark is public, everybody knows exactly where in the music it is stored. Common sense, and experience too, says that if you know where in a file information is stored, you can modify that part of the file and obliterate the information. But if the watermark is secret, then an adversary isn’t told where to look for it or how to change the file to remove it. Robustness of the watermark is an important issue that has been the downfall of past watermark systems.

A bigger problem with the public-watermark design, I think, lies in the forces unleashed when your design principle is to enable fraud. For example, the system will lose its force if unrelated anti-fraud measures become more effective, or if the financial system acts to protect users from fraud. Today, a consumer’s liability for fraudulent credit card transactions is capped at $50, and credit card companies often forgive even that $50. (You could use some other account information instead of the credit card number, but similar issues would still apply.) Copyright owners would be the only online merchants who wanted a higher level of fraud on the Net.

Worse yet, even law-abiding consumers would face a higher risk of fraud, because any loss or theft of their music or movie files would expose their financial information. Spyware programs could collect this information from users’ computers – and studies show that at least half of end-user PCs are infected with spyware. Law-abiding users would have a strong incentive to scrub the information out of their files, even if they had no intention of infringing. Alert anti-virus or anti-spyware vendors would be eager to provide this service.

Given the disadvantages of a public-watermark scheme, what are the arguments for it? Randy Picker argues that it gives end users an incentive to distrust fly-by-night purveyors of ripping software, worrying that they might steal the user’s information from the files and commit fraud. This isn’t entirely convincing: some such tools already contain heinous spyware that could cause users lots of harm, and reputable security suppliers are likely to provide watermark-scrubbing tools anyway. I think the threat of secret watermarks hidden in files, which fly-by-night vendors have no incentive to remove, would probably scare users enough.

On the whole, then, I think a secret-watermark scheme is better than a public-watermark one. But it should be noted that secret-watermark schemes themselves aren’t looking too good. They have mostly failed in the market, for reasons I’ll start digging into tomorrow.

Software Security: Creativity in a New Discipline

This is the last excerpt from my new book, Software Security: Building Security In. This might be a good time to buy the book.

Creativity in a New Discipline

We are experiencing a time of great creativity in computer security and must seize the opportunity presented by our current situation while we can. The diversity of backgrounds represented by today’s security practitioners may be a high-water mark. Consider that today’s security thought leaders were trained in fields as diverse as biostatistics, divinity, economics, and cognitive science, and thus bring with them interesting new perspectives on the security challenge. This leads to creative interplay in the field and has resulted in interesting progress, including the emergence of economic theories of security, an embrace of risk management, an emphasis on process-driven approaches (versus product sets), a shift toward software security, the rise of security engineering, and so on. As the worldwide security paradigm shift from guns, dogs, and concrete to networks, information systems, and computers continues unabated, we must leverage this time of creative diversity for all it’s worth.

A number of young researchers joined the computer security field in the mid-1990s, changing the focus of security research from spookware and national defense (think crypto, multilevel security, communications monitoring, and the like) to commercial systems and commerce. This movement away from military-oriented research was driven in part by the widespread public adoption of the Internet and the growing trend of e-commerce. With money at stake, security quickly became as relevant to business as it was to national defense. This influx of “new blood” shook up the scientific security research community and continues to have far-reaching effects that are only now affecting commercial security—the commercialization of firewalls, the rise of antivirus technology, and the adoption of modern security platforms, such as Java and .NET, were all predicted and spearheaded by new thinkers in the security research community.

Where Today’s Security People Come From

Only a handful of people working in computer security today started their careers in the field. In fact, academic programs expressly designed to train security practitioners are a recent phenomenon and remain rare.

Interestingly, it may be in this dearth of “qualified” people trained in security that a critical opportunity can be found. Though few practitioners have academic security training, they most assuredly do have academic training in some field of study. That means that as a collective, the computer security field is filled with diverse and interesting points of view. This is exactly the sort of Petri dish of ideas that led to the Renaissance at the end of the Dark Ages.

Diversity of ideas is healthy, and it lends a creativity and drive to the security field that we must take advantage of. A great example of this can be found in the new subfield of software security. Only five years ago the notion that bad software might be a major root cause of security issues was not common. Today, software security is the subject of keynote talks at the RSA security conference, and we all seem to agree that we have a software problem to solve. This change was partially due to the involvement of programming languages people (once found only at obscure academic conferences like OOPSLA) in the security field. Such involvement resulted in the creation of modern languages like Java and .NET that include security models in their very design. When languages are declared “secure,” things get interesting! The evolutionary arms race between attackers and defenders jumps a level, new avenues for security design emerge, and dusty but thorny problems (think “buffer overflow”) become less relevant to the next generation of systems.

Where Tomorrow’s Security People Will Come From

These days, academic and professional training programs are being put in place to train the next generation of security professionals. Soon, standard curricula will be developed, and students will be required to understand the same core set of concepts. This will certainly help to solidify the field of computer security, but at the same time, there is a danger that generalization may lead to a homogenization of security. Instead of the creative soup afforded by a multiplicity of points of view spanning many fields, security runs the risk of becoming staid and static. If we are careful to avoid complete homogenization of the field, we can retain the benefits of diversity while building a solid academic discipline. One way to do this might be to encourage those students seeking computer security degrees to study widely in other supposedly unrelated disciplines as well. Another is to ensure that outside perspectives remain welcome in the field and are not dismissed out of hand. Computer security must remain an inclusive discipline in order to retain its creativity.

In any case, we must take advantage of the situation we find ourselves in now. Computer security is, in fact, experiencing an important rebirth, and now is the time to make great progress. We must pay close attention to different ideas, embrace change, and help security continue to evolve even as it begins to crystallize.

Software Security: A Case Study

Here is another excerpt from my new book, Software Security: Building Security In.

An Example: Java Card Security Testing

Doing effective security testing requires experience and knowledge. Examples and case studies like the one I present here are thus useful tools for understanding the approach.

In an effort to enhance payment cards with new functionality—such as the ability to provide secure cardholder identification or remember personal preferences—many credit-card companies are turning to multi-application smart cards. These cards use resident software applications to process and store thousands of times more information than traditional magnetic-stripe cards.

Security and fraud issues are critical concerns for the financial institutions and merchants spearheading smart-card adoption. By developing and deploying smart-card technology, credit-card companies provide important new tools in the effort to lower fraud and abuse. For instance, smart cards typically use a sophisticated crypto system to authenticate transactions and verify the identities of the cardholder and issuing bank. However, protecting against fraud and maintaining security and privacy are both very complex problems because of the rapidly evolving nature of smart-card technology.

The security community has been involved in security risk analysis and mitigation for Open Platform (now known as Global Platform, or GP) and Java Card since early 1997. Because product security is an essential aspect of credit-card companies’ brand protection regimen, companies like Visa and MasterCard spend plenty of time and effort on security testing and risk analysis. One central finding emphasizes the importance of testing particular vendor implementations according to our two testing categories: adherence to functional security design and proper behavior under particular attacks motivated by security risks.

The latter category, adversarial security testing (linked directly to risk analysis findings), ensures that cards can perform securely in the field even when under attack. Risk analysis results can be used to guide manual security testing. As an example, consider the risk that, as designed, the object-sharing mechanism in Java Card is complex and thus is likely to suffer from security-critical implementation errors on any given manufacturer’s card. Testing for this sort of risk involves creating and manipulating stored objects where sharing is involved. Given a technical description of this risk, building specific probing tests is possible.

Automating Security Testing

Over the years, Cigital has been involved in several projects that have identified architectural risks in the GP/Java Card platform, suggested several design improvements, and designed and built automated security tests for final products (each of which has multiple vendors).

Several years ago, we began developing an automated security test framework for GP cards built on Java Card 2.1.1 and based on extensive risk analysis results. The end result is a sophisticated test framework that runs with minimal human intervention and results in a qualitative security testing analysis of a sample smart card. This automated framework is now in use at MasterCard and the U.S. National Security Agency.

The first test set, the functional security test suite, directly probes low-level card security functionality. It includes automated testing of class codes, available commands, and crypto functionality. This test suite also actively probes for inappropriate card behavior of the sort that can lead to security compromise.
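
As a rough illustration of what one such functional probe can look like (this is a sketch of the idea, not Cigital’s actual harness), the fragment below sweeps the APDU instruction space and records which commands a card answers. The transmit function is a hypothetical stand-in for the real card transport, which in practice would be a PC/SC or vendor library.

    # Instruction sweep sketch: find every command the card responds to.
    def sweep_instructions(transmit, cla=0x80):
        """Return {ins: status_word} for each instruction byte the card answers."""
        results = {}
        for ins in range(0x100):
            apdu = bytes([cla, ins, 0x00, 0x00, 0x00])   # CLA INS P1 P2 Lc
            status = transmit(apdu)
            if status != 0x6D00:                         # 0x6D00 = instruction not supported
                results[ins] = status                    # 0x9000 would mean success
        return results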

The second test set, the hostile applet test suite, is a sophisticated set of intentionally hostile Java Card applets designed to probe high-risk aspects of the GP on a Java Card implementation.

Results: Nonfunctional Security Testing Is Essential

Most cards tested with the automated test framework (but not all) pass all functional security tests, which we expect because smart-card vendors are diligent with functional testing (including security functionality). Because smart cards are complex embedded devices, vendors realize that exactly meeting functional requirements is an absolute necessity for customers to accept the cards. After all, they must perform properly worldwide.

However, every card submitted to the risk-based testing paradigm exhibited some manner of failure when tested with the hostile applet suite. Some failures pointed directly to critical security vulnerabilities on the card; others were less specific and required further exploration to determine the card’s true security posture.

As an example, consider that risk analysis of Java Card’s design documents indicates that proper implementation of atomic transaction processing is critical for maintaining a secure card. Java Card has the capability of defining transaction boundaries to ensure that if a transaction fails, data roll back to a pre-transaction state. In the event that transaction processing fails, transactions can go into any number of possible states, depending on what the applet was attempting. In the case of a stored-value card, bad transaction processing could allow an attacker to “print money” by forcing the card to roll back value counters while actually purchasing goods or services. This is called a “torn transaction” attack in credit-card risk lingo.

When creating risk-based tests to probe transaction processing, we directly exercised transaction-processing error handling by simulating an attacker attempting to violate a transaction—specifically, transactions were aborted or never committed, transaction buffers were completely filled, and transactions were nested (a no-no according to the Java Card specification). These tests were not based strictly on the card’s functionality—instead, security test engineers intentionally created them, thinking like an attacker given the results of a risk analysis.
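
To give a flavor of what these probes check, here is a sketch of three of the scenarios written as tests against a hypothetical card object (with begin/commit/abort/debit/balance/reset and goods-counter operations). The real tests ran as hostile applets and APDU scripts rather than Python, so treat this only as an expression of the tests’ intent.

    # Transaction-processing probes, sketched against a hypothetical card API.
    def test_aborted_transaction(card):
        before = card.balance()
        card.begin()
        card.debit(10)
        card.abort()                        # the card must roll the debit back
        assert card.balance() == before

    def test_torn_purchase(card):
        before = card.balance()
        card.begin()
        card.debit(10)
        card.issue_goods()                  # purchase half-done...
        card.reset()                        # ...then the transaction is torn
        # A secure card leaves both fields rolled back or both committed; goods
        # issued with the balance restored is the "print money" failure.
        assert (card.balance(), card.goods_issued()) in [(before, 0), (before - 10, 1)]

    def test_nested_transaction(card):
        card.begin()
        nested_allowed = True
        try:
            card.begin()                    # nesting is forbidden by the Java Card spec
        except Exception:
            nested_allowed = False          # expected: the card refuses
        finally:
            card.abort()
        assert not nested_allowed, "card allowed a nested transaction"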

Several real-world cards failed subsets of the transaction tests. The vulnerabilities discovered as a result of these tests would allow an attacker to terminate a transaction in a potentially advantageous manner—a critical test failure that wouldn’t have been uncovered under normal functional security testing. Fielding cards with these vulnerabilities would allow an attacker to execute successful attacks on live cards issued to the public. Because of proper risk-based security testing, the vendors were notified of the problems and corrected the code responsible before release.