April 20, 2024

Software Security: A Case Study

Here is another excerpt from my new book, Software Security: Building Security In.

An Example: Java Card Security Testing

Doing effective security testing requires experience and knowledge. Examples and case studies like the one I present here are thus useful tools for understanding the approach.

In an effort to enhance payment cards with new functionality—such as the ability to provide secure cardholder identification or remember personal preferences—many credit-card companies are turning to multi-application smart cards. These cards use resident software applications to process and store thousands of times more information than traditional magnetic-stripe cards.

Security and fraud issues are critical concerns for the financial institutions and merchants spearheading smart-card adoption. By developing and deploying smart-card technology, credit-card companies provide important new tools in the effort to lower fraud and abuse. For instance, smart cards typically use a sophisticated crypto system to authenticate transactions and verify the identities of the cardholder and issuing bank. However, protecting against fraud and maintaining security and privacy are both very complex problems because of the rapidly evolving nature of smart-card technology.

The security community has been involved in security risk analysis and mitigation for Open Platform (now known as Global Platform, or GP) and Java Card since early 1997. Because product security is an essential aspect of credit-card companies’ brand protection regimen, companies like Visa and MasterCard spend plenty of time and effort on security testing and risk analysis. One central finding emphasizes the importance of testing particular vendor implementations according to our two testing categories: adherence to functional security design and proper behavior under particular attacks motivated by security risks.

The latter category, adversarial security testing (linked directly to risk analysis findings), ensures that cards can perform securely in the field even when under attack. Risk analysis results can be used to guide manual security testing. As an example, consider the risk that, as designed, the object-sharing mechanism in Java Card is complex and thus is likely to suffer from security-critical implementation errors on any given manufacturer’s card. Testing for this sort of risk involves creating and manipulating stored objects where sharing is involved. Given a technical description of this risk, building specific probing tests is possible.
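
To make this concrete, here is a minimal sketch of the kind of probing applet such a test might install. It assumes a hypothetical server applet that exports a shareable interface; the AID bytes, PurseInterface, and getBalance are illustrative names, not part of any real product.

```java
import javacard.framework.AID;
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.JCSystem;
import javacard.framework.Shareable;

// Illustrative shareable interface the server applet is assumed to export.
interface PurseInterface extends Shareable {
    short getBalance();
}

// Hypothetical client applet that probes the object-sharing firewall.
public class SharingProbe extends Applet {

    // AID of the hypothetical server applet (illustrative bytes).
    private static final byte[] SERVER_AID =
        { (byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01 };

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new SharingProbe().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        AID server = JCSystem.lookupAID(SERVER_AID, (short) 0,
                (byte) SERVER_AID.length);
        if (server == null) {
            ISOException.throwIt(ISO7816.SW_FUNC_NOT_SUPPORTED);
        }
        // Ask the firewall for the server's shareable interface object,
        // then exercise it across the context boundary. A correct
        // implementation mediates every such call; a flawed one can leak
        // access to the server's private state.
        PurseInterface purse = (PurseInterface)
            JCSystem.getAppletShareableInterfaceObject(server, (byte) 0);
        if (purse == null) {
            ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
        }
        short balance = purse.getBalance(); // context switch under test
        byte[] buf = apdu.getBuffer();
        buf[0] = (byte) (balance >> 8);
        buf[1] = (byte) balance;
        apdu.setOutgoingAndSend((short) 0, (short) 2);
    }
}
```

A harness installs probes like this alongside the applet under test and watches whether the firewall mediates every cross-context access as the specification requires.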

Automating Security Testing

Over the years, Cigital has been involved in several projects that have identified architectural risks in the GP/Java Card platform, suggested several design improvements, and designed and built automated security tests for final products (each of which has multiple vendors).

Several years ago, we began developing an automated security test framework for GP cards built on Java Card 2.1.1, based on extensive risk analysis results. The end result is a sophisticated test framework that runs with minimal human intervention and produces a qualitative security analysis of a sample smart card. This automated framework is now in use at MasterCard and the U.S. National Security Agency.

The first test set, the functional security test suite, directly probes low-level card security functionality. It includes automated testing of class codes, available commands, and crypto functionality. This test suite also actively probes for inappropriate card behavior of the sort that can lead to security compromise.
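
To give a feel for this low-level probing, here is a hedged host-side sketch using the standard javax.smartcardio API: it sweeps the APDU class byte (CLA) and records which class codes the card admits to supporting. This is a minimal illustration of the technique, not the actual framework.

```java
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

// Sweep all 256 CLA values and log every class the card does not reject.
// SW 0x6E00 means "class not supported"; anything else is worth a look.
public class ClassCodeSweep {
    public static void main(String[] args) throws Exception {
        CardTerminal terminal =
                TerminalFactory.getDefault().terminals().list().get(0);
        Card card = terminal.connect("*");
        CardChannel channel = card.getBasicChannel();

        for (int cla = 0x00; cla <= 0xFF; cla++) {
            // INS 0xA4 (SELECT) with empty parameters; the exact INS matters
            // less than whether the card rejects the class byte outright.
            // CLA 0xFF is reserved by ISO 7816, so the sweep deliberately
            // includes edge values a polite client never sends.
            ResponseAPDU r =
                    channel.transmit(new CommandAPDU(cla, 0xA4, 0x00, 0x00));
            if (r.getSW() != 0x6E00) {
                System.out.printf("CLA %02X answered SW=%04X%n", cla, r.getSW());
            }
        }
        card.disconnect(false);
    }
}
```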

The second test set, the hostile applet test suite, is a sophisticated set of intentionally hostile Java Card applets designed to probe high-risk aspects of a GP implementation running on Java Card.
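
The actual suite is not public, but a sketch conveys the flavor. The hypothetical applet below tries to stash a reference to the global APDU buffer in an instance field, a store the Java Card Runtime Environment specification requires the firewall to reject with a SecurityException; a card that permits it would let one applet snoop on another's APDU traffic. The class name and status-word conventions are invented for this example.

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

// Hostile probe (illustrative): attempt to keep the global APDU buffer.
public class BufferThief extends Applet {

    private byte[] stolenBuffer; // illegal home for a global array reference

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new BufferThief().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        try {
            stolenBuffer = apdu.getBuffer(); // the JCRE must reject this store
        } catch (SecurityException e) {
            return; // correct behavior: report success (SW 0x9000)
        }
        // The store was allowed: flag the failure to the test harness.
        ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
    }
}
```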

Results: Nonfunctional Security Testing Is Essential

Most (but not all) cards tested with the automated framework pass all functional security tests, which we expect because smart-card vendors are diligent about functional testing (including security functionality). Because smart cards are complex embedded devices, vendors realize that exactly meeting functional requirements is an absolute necessity for customers to accept the cards. After all, they must perform properly worldwide.

However, every card submitted to the risk-based testing paradigm exhibited some manner of failure when tested with the hostile applet suite. Some failures pointed directly to critical security vulnerabilities on the card; others were less specific and required further exploration to determine the card’s true security posture.

As an example, consider that risk analysis of Java Card’s design documents indicates that proper implementation of atomic transaction processing is critical for maintaining a secure card. Java Card has the capability of defining transaction boundaries to ensure that if a transaction fails, data roll back to a pre-transaction state. In the event that transaction processing fails, transactions can go into any number of possible states, depending on what the applet was attempting. In the case of a stored-value card, bad transaction processing could allow an attacker to “print money” by forcing the card to roll back value counters while actually purchasing goods or services. This is called a “torn transaction” attack in credit-card risk lingo.
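
To make the mechanics concrete, here is a purely illustrative stored-value fragment; the Purse class and its fields are invented for this sketch. Both persistent updates must commit or roll back together, and the torn-transaction risk lives precisely in a card that gets this wrong.

```java
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.JCSystem;

// Illustrative stored-value debit. If power is torn mid-debit and the
// card's rollback is implemented incorrectly, the balance may survive
// while the purchase appears recorded: the "torn transaction" attack.
final class Purse {
    private short balance;        // persistent state on a real card
    private short purchaseCount;  // persistent purchase counter

    void debit(short amount) {
        if (amount < 0 || balance < amount) {
            ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
        }
        JCSystem.beginTransaction();
        balance -= amount;        // update 1
        purchaseCount++;          // update 2
        JCSystem.commitTransaction();
        // A tear before commitTransaction() must restore BOTH fields to
        // their pre-transaction values at the next card reset.
    }
}
```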

When creating risk-based tests to probe transaction processing, we directly exercised transaction-processing error handling by simulating an attacker attempting to violate a transaction—specifically, transactions were aborted or never committed, transaction buffers were completely filled, and transactions were nested (a no-no according to the Java Card specification). These tests were not based strictly on the card’s functionality—instead, security test engineers intentionally created them, thinking like an attacker given the results of a risk analysis.
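
As one example of these probes, the illustrative applet below attempts the nested transaction the specification forbids. A conforming card must raise TransactionException with reason IN_PROGRESS; any other outcome is flagged to the harness via an arbitrary error status word.

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.JCSystem;
import javacard.framework.TransactionException;

// Hostile transaction probe (illustrative): nest a forbidden transaction.
public class NestedTransactionProbe extends Applet {

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new NestedTransactionProbe().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        JCSystem.beginTransaction();
        try {
            JCSystem.beginTransaction(); // illegal nested transaction
        } catch (TransactionException e) {
            if (e.getReason() == TransactionException.IN_PROGRESS) {
                JCSystem.abortTransaction();
                return; // correct behavior (SW 0x9000)
            }
        } finally {
            // Leave no transaction open if the card misbehaved.
            if (JCSystem.getTransactionDepth() != 0) {
                JCSystem.abortTransaction();
            }
        }
        // Nesting was allowed or the wrong reason code came back.
        ISOException.throwIt(ISO7816.SW_CONDITIONS_NOT_SATISFIED);
    }
}
```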

Several real-world cards failed subsets of the transaction tests. The vulnerabilities discovered as a result of these tests would allow an attacker to terminate a transaction in a potentially advantageous manner—a critical test failure that wouldn’t have been uncovered under normal functional security testing. Fielding cards with these vulnerabilities would allow an attacker to execute successful attacks on live cards issued to the public. Because of proper risk-based security testing, the vendors were notified of the problems and corrected the code responsible before release.

Comments

  1. –> “stored value with one factor”

    I agree that having the user participate in authentication is a reasonable idea.

    gem

  2. Sorry, my recent example wasn’t 2-factor, but challenge/response.

  3. Well, look abroad to Europe if you want to count just how many banks and credit card companies are happy to use one factor – you may have to redefine your notion of ‘nobody’.

    Even with two factor, there are still terrible flaws being created by the banks, e.g. they will ring the card holder up, and request a variety of digits from them to ‘take them through security’. So they are actively training their customers to treat cold-calls as being intrinsically legitimate circumstances where they should reel off some of their digits to a complete stranger. It would probably take only 3 such cold-calls to have a better than 90% chance of possessing the necessary digits for the impostor to then authenticate with the real bank.

    Not only should you not remove the human from the authentication process, but you should not ignore innate human aptitude for unwittingly compromising authentication systems that they have been removed from.

  4. Gary. It seems you’ve missed the attack.
    Steal the card and the PIN and you have the card holder’s money. Sheesh. :-}
    How do you get away with stealing the card? You kid the owner they still have it.
    How? Because the chip is in a standard location. Once you’ve scraped the chip, glue it back on another card, stick it in a hole in the wall and run off with the money.

    They keep the card along with the pretty hologram, and a fake chip. And as you say, how many owners can tell the difference between a good chip and a fake chip?

    The human card holder has been removed from the authentication process. They’re not watching, because they’ve been trained to ‘not get involved’ – because the banks think they’re too stupid (or as you suggest, too corrupt).

    Never treat your customers as the enemy. It’s a very slippery slope from considering that 1% of customers may take advantage of easy scams, to treating all customers that way.
    Your customers are your ally. They’re the only friend you have on the battlefield, and to avoid using their intelligence in the authentication process is a terrible waste of computing resources.

  5. Interesting attack idea. I’m afraid that one is covered rather nicely. It’s difficult to spoof most chips (especially the costly public key crypto bearing ones). A more interesting attack is to “brain in the vat” a chip card by sending interesting input to the chip and completely controlling its environment. For more, google up “Paul Kocher Differential Power Analysis”.

    The most obvious super modern attacks now take advantage of wireless acceptance devices. What fun.

    In the chip card case, the human carrying the card around is not considered a “good guy.” Any such assumption would result in an instantly compromised card set.

    gem

  6. I’m reminded that standardisation itself can be a vulnerability. The more characteristics that are predictable, the more amenable brute force attacks become.

    For example the ‘chip & pin’ credit cards have the chip circuitry in a specific position. How straightforward would it be to produce a machine that pressed the circuit out from the other side of the card by a millimetre or less so it could be scraped, replaced with a fake, and pressed back within a few tenths of a second? The card chip extractor can even collect the willing entry of the PIN from the unwitting customer (unlikely to make a close inspection of the card).

    I note that there’s been no attempt to educate users about how they can authenticate the card reader, or to teach them to be wary of such machines (e.g. by deliberately entering an incorrect PIN first). The reader doesn’t even have to demonstrate that it is actually reading anything from the card, e.g. the holder’s name (as also embossed).

    An authentic transaction is a human one. Remove the human, and you’ve lost your greatest ally.

  7. Hi Florian,

    There was almost nothing in the way of formal verification in Java Card design. The Mondex guys did a much more thorough job that way, but ended up with a commercial disaster. My opinion is that though formal verification has its merits, the market is not ready to bear the cost.

    Our testing at Cigital was driven by expert risk analysis. If you look in the book, you will find code examples that may shed more light on specific testing style. We found that risk-driven testing was able to probe very deeply into card behavior.

    The design did require some specific transaction behavior, and many of the risks we pointed out during architectural risk analysis were focused on that. (In fact, some of the killer risks that were too large to manage in the field and thus required design changes involved serious transitive trust issues in preliminary design.)

    gem

  8. Can you share any details about how your testing interacted with formal verification? Wasn’t there any, or didn’t the specification require proper transactional behavior?