October 12, 2024

Software Security: Creativity in a New Discipline

This is the last excerpt from my new book, Software Security: Building Security In. This might be a good time to buy the book.

Creativity in a New Discipline

We are experiencing a time of great creativity in computer security and must seize the opportunity presented by our current situation while we can. The diversity of backgrounds represented by today’s security practitioners may be a high-water mark. Consider that today’s security thought leaders were trained in fields as diverse as biostatistics, divinity, economics, and cognitive science, and thus bring with them interesting new perspectives on the security challenge. This leads to creative interplay in the field and has resulted in interesting progress, including the emergence of economic theories of security, an embrace of risk management, an emphasis on process-driven approaches (versus product sets), a shift toward software security, the rise of security engineering, and so on. As the worldwide security paradigm shift from guns, dogs, and concrete to networks, information systems, and computers continues unabated, we must leverage this time of creative diversity for all it’s worth.

A number of young researchers joined the computer security field in the mid-1990s, changing the focus of security research from spookware and national defense (think crypto, multilevel security, communications monitoring, and the like) to commercial systems and commerce. This movement away from military-oriented research was driven in part by the widespread public adoption of the Internet and the growing trend of e-commerce. With money at stake, security quickly became as relevant to business as it was to national defense. This influx of “new blood” shook up the scientific security research community and continues to have far-reaching effects that are only now being felt in commercial security—the commercialization of firewalls, the rise of antivirus technology, and the adoption of modern security platforms, such as Java and .NET, were all predicted and spearheaded by new thinkers in the security research community.

Where Today’s Security People Come From

Only a handful of people working in computer security today started their careers in the field. In fact, academic programs expressly designed to train security practitioners are a recent phenomenon and remain rare.

Interestingly, it may be in this dearth of “qualified” people trained in security that a critical opportunity can be found. Though few practitioners have academic security training, they most assuredly do have academic training in some field of study. That means that as a collective, the computer security field is filled with diverse and interesting points of view. This is exactly the sort of Petri dish of ideas that led to the Renaissance at the end of the Dark Ages.

Diversity of ideas is healthy, and it lends a creativity and drive to the security field that we must take advantage of. A great example of this can be found in the new subfield of software security. Only five years ago, the notion that bad software might be a major root cause of security issues was not common. Today, software security is the subject of keynote talks at the RSA security conference, and we all seem to agree that we have a software problem to solve. This change was partially due to the involvement of programming languages people (once found only at obscure academic conferences like OOPSLA) in the security field. Such involvement resulted in the creation of modern languages and platforms like Java and .NET that include security models in their very design. When languages are declared “secure,” things get interesting! The evolutionary arms race between attackers and defenders jumps a level, new avenues for security design emerge, and dusty but thorny problems (think “buffer overflow”) become less relevant to the next generation of systems.

Where Tomorrow’s Security People Will Come From

These days, academic and professional training programs are being put in place to train the next generation of security professionals. Soon, standard curricula will be developed, and students will be required to understand the same core set of concepts. This will certainly help to solidify the field of computer security, but at the same time, there is a danger that standardization may lead to a homogenization of security. Instead of the creative soup afforded by a multiplicity of points of view spanning many fields, security runs the risk of becoming staid and static. If we are careful to avoid complete homogenization of the field, we can retain the benefits of diversity while building a solid academic discipline. One way to do this might be to encourage those students seeking computer security degrees to study widely in other supposedly unrelated disciplines as well. Another is to ensure that outside perspectives remain welcome in the field and are not dismissed out of hand. Computer security must remain an inclusive discipline in order to retain its creativity.

In any case, we must take advantage of the situation we find ourselves in now. Computer security is, in fact, experiencing an important rebirth, and now is the time to make great progress. We must pay close attention to different ideas, embrace change, and help security continue to evolve even as it begins to crystallize.

Software Security: A Case Study

Here is another excerpt from my new book, Software Security: Building Security In.

An Example: Java Card Security Testing

Doing effective security testing requires experience and knowledge. Examples and case studies like the one I present here are thus useful tools for understanding the approach.

In an effort to enhance payment cards with new functionality—such as the ability to provide secure cardholder identification or remember personal preferences—many credit-card companies are turning to multi-application smart cards. These cards use resident software applications to process and store thousands of times more information than traditional magnetic-stripe cards.

Security and fraud issues are critical concerns for the financial institutions and merchants spearheading smart-card adoption. By developing and deploying smart-card technology, credit-card companies provide important new tools in the effort to lower fraud and abuse. For instance, smart cards typically use a sophisticated crypto system to authenticate transactions and verify the identities of the cardholder and issuing bank. However, protecting against fraud and maintaining security and privacy are both very complex problems because of the rapidly evolving nature of smart-card technology.

The security community has been involved in security risk analysis and mitigation for Open Platform (now known as Global Platform, or GP) and Java Card since early 1997. Because product security is an essential aspect of credit-card companies’ brand protection regimen, companies like Visa and MasterCard spend plenty of time and effort on security testing and risk analysis. One central finding emphasizes the importance of testing particular vendor implementations according to our two testing categories: adherence to functional security design and proper behavior under particular attacks motivated by security risks.

The latter category, adversarial security testing (linked directly to risk analysis findings), ensures that cards can perform securely in the field even when under attack. Risk analysis results can be used to guide manual security testing. As an example, consider the risk that, as designed, the object-sharing mechanism in Java Card is complex and thus is likely to suffer from security-critical implementation errors on any given manufacturer’s card. Testing for this sort of risk involves creating and manipulating stored objects where sharing is involved. Given a technical description of this risk, building specific probing tests is possible.
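To make this concrete, here is a minimal sketch of such a probing applet; the server AID, the shared interface, and the status words are all hypothetical, and this is an illustration rather than a test from the actual suite. The applet asks the card's firewall for another applet's shareable interface object and then deliberately misuses it, checking that the card rejects the misuse as the Java Card specification requires.

    import javacard.framework.AID;
    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.JCSystem;
    import javacard.framework.Shareable;

    // Hypothetical shareable interface the server never actually exposes.
    interface PurseInterface extends Shareable {
        void debit(short amount);
    }

    public class SharingProbe extends Applet {

        // AID of the server applet under test (hypothetical value).
        private static final byte[] SERVER_AID =
                { (byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01 };

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new SharingProbe().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return;
            }
            AID server = JCSystem.lookupAID(SERVER_AID, (short) 0,
                    (byte) SERVER_AID.length);
            if (server == null) {
                ISOException.throwIt(ISO7816.SW_FUNC_NOT_SUPPORTED);
            }
            // Ask the applet firewall for the server's shareable object.
            Shareable sio =
                    JCSystem.getAppletShareableInterfaceObject(server, (byte) 0);
            if (sio == null) {
                return; // server refused to share: acceptable behavior
            }
            try {
                // Deliberate misuse: cast to an interface the server never
                // agreed to share and invoke it across contexts.
                ((PurseInterface) sio).debit((short) 1);
                // Getting here means the firewall let the call through:
                // a security-critical failure (status word is illustrative).
                ISOException.throwIt((short) 0x9F01);
            } catch (ClassCastException e) {
                // Expected: the cast to an unshared interface is rejected.
            } catch (SecurityException e) {
                // Expected: cross-context access is denied by the firewall.
            }
        }
    }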

Automating Security Testing

Over the years, Cigital has been involved in several projects that have identified architectural risks in the GP/Java Card platform, suggested several design improvements, and designed and built automated security tests for final products (each of which has multiple vendors).

Several years ago, we began developing an automated security test framework for GP cards built on Java Card 2.1.1, based on extensive risk analysis results. The end result is a sophisticated test framework that runs with minimal human intervention and produces a qualitative security analysis of a sample smart card. This automated framework is now in use at MasterCard and the U.S. National Security Agency.

The first test set, the functional security test suite, directly probes low-level card security functionality. It includes automated testing of class codes, available commands, and crypto functionality. This test suite also actively probes for inappropriate card behavior of the sort that can lead to security compromise.
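To give a flavor of this kind of probing, here is a minimal host-side sketch (illustrative only, and much simpler than the framework described here) that uses Java's standard javax.smartcardio API to walk a card's class-code and instruction space and record which commands the card admits.

    import java.util.List;
    import javax.smartcardio.Card;
    import javax.smartcardio.CardChannel;
    import javax.smartcardio.CardTerminal;
    import javax.smartcardio.CommandAPDU;
    import javax.smartcardio.ResponseAPDU;
    import javax.smartcardio.TerminalFactory;

    public class CommandScan {
        public static void main(String[] args) throws Exception {
            // Connect to the first attached reader using the T=0 protocol.
            List<CardTerminal> terminals =
                    TerminalFactory.getDefault().terminals().list();
            Card card = terminals.get(0).connect("T=0");
            CardChannel channel = card.getBasicChannel();

            for (int cla = 0x00; cla < 0xFF; cla++) { // 0xFF is an invalid class byte
                for (int ins = 0x00; ins <= 0xFF; ins++) {
                    // INS bytes of the form 6X and 9X are invalid in T=0.
                    if ((ins & 0xF0) == 0x60 || (ins & 0xF0) == 0x90) {
                        continue;
                    }
                    ResponseAPDU r = channel.transmit(
                            new CommandAPDU(cla, ins, 0x00, 0x00));
                    int sw = r.getSW();
                    // 6E00 = class not supported, 6D00 = instruction not
                    // supported; anything else means the card admits the
                    // command and deserves a closer look.
                    if (sw != 0x6E00 && sw != 0x6D00) {
                        System.out.printf("CLA=%02X INS=%02X -> SW=%04X%n",
                                cla, ins, sw);
                    }
                }
            }
            card.disconnect(false);
        }
    }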

The second test set, the hostile applet test suite, is a sophisticated set of intentionally hostile Java Card applets designed to probe high-risk aspects of a GP-on-Java-Card implementation.

Results: Nonfunctional Security Testing Is Essential

Most (but not all) cards tested with the automated test framework pass all functional security tests. This is what we expect, because smart-card vendors are diligent about functional testing (including security functionality). Because smart cards are complex embedded devices, vendors realize that exactly meeting functional requirements is an absolute necessity for customers to accept the cards. After all, they must perform properly worldwide.

However, every card submitted to the risk-based testing paradigm exhibited some manner of failure when tested with the hostile applet suite. Some failures pointed directly to critical security vulnerabilities on the card; others were less specific and required further exploration to determine the card’s true security posture.

As an example, consider that risk analysis of Java Card’s design documents indicates that proper implementation of atomic transaction processing is critical for maintaining a secure card. Java Card provides the capability to define transaction boundaries so that if a transaction fails, data are rolled back to their pre-transaction state. If transaction processing fails, a transaction can be left in any number of possible states, depending on what the applet was attempting. In the case of a stored-value card, bad transaction processing could allow an attacker to “print money” by forcing the card to roll back value counters while actually purchasing goods or services. This is called a “torn transaction” attack in credit-card risk lingo.

When creating risk-based tests to probe transaction processing, we directly exercised transaction-processing error handling by simulating an attacker attempting to violate a transaction—specifically, transactions were aborted or never committed, transaction buffers were completely filled, and transactions were nested (a no-no according to the Java Card specification). These tests were not based strictly on the card’s functionality—instead, security test engineers intentionally created them, thinking like an attacker given the results of a risk analysis.
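Here is a minimal sketch of two such probes, assuming a hypothetical test applet and illustrative status words: abort an in-progress update and verify that the card rolls it back, then attempt the forbidden nested transaction and verify that the card objects.

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISOException;
    import javacard.framework.JCSystem;
    import javacard.framework.TransactionException;

    public class TornTransactionProbe extends Applet {

        private short balance = 100; // persistent value counter

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new TornTransactionProbe().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return;
            }

            // Probe 1: update persistent state inside a transaction, then
            // abort. A correct card restores the pre-transaction balance.
            JCSystem.beginTransaction();
            balance -= 10;
            JCSystem.abortTransaction();
            if (balance != 100) {
                ISOException.throwIt((short) 0x9F10); // torn-transaction failure
            }

            // Probe 2: nested transactions are forbidden by the spec; a
            // correct card throws TransactionException here.
            JCSystem.beginTransaction();
            try {
                JCSystem.beginTransaction();
                ISOException.throwIt((short) 0x9F11); // nesting allowed: failure
            } catch (TransactionException e) {
                // Expected rejection.
            } finally {
                JCSystem.abortTransaction();
            }
            // A fuller probe would also fill the commit buffer (see
            // JCSystem.getUnusedCommitCapacity()) and tear live sessions.
        }
    }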

Several real-world cards failed subsets of the transaction tests. The vulnerabilities discovered as a result of these tests would allow an attacker to terminate a transaction in a potentially advantageous manner—a critical test failure that wouldn’t have been uncovered under normal functional security testing. Fielding cards with these vulnerabilities would allow an attacker to execute successful attacks on live cards issued to the public. Because of proper risk-based security testing, the vendors were notified of the problems and corrected the code responsible before release.

Software Security: The Badness-ometer

Here is another excerpt from my new book, Software Security: Building Security In.

Application Security Tools: Good or Bad?

Application security testing products are being sold as a solution to the problem of insecure software. Unfortunately, these first-generation solutions are not all they are cracked up to be. They may help us diagnose, describe, and demonstrate the problem, but they do little to help us fix it.

Today’s application security products treat software applications as “black boxes” that are prone to misbehave and must be probed and prodded to prevent security disaster. Unfortunately, this approach is too simple.

Software testing requires planning and should be based on software requirements and the architecture of the code under test. You can’t “test quality in” by painstakingly finding and removing bugs once the code is done. The same goes for security; running a handful of canned tests that “simulate malicious hackers” by sending malformed input streams to a program will not work. Real attackers don’t simply “fuzz” a program with input to find problems. Attackers take software apart, determine how it works, and make it misbehave by doing what users are not supposed to do. The essence of the disconnect is that black box testing approaches, including application security testing tools, only scratch the surface of software in an outside→in fashion instead of digging into the guts of software and securing things from the inside.
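For concreteness, here is a minimal sketch of the sort of canned, outside→in probe at issue (the target host is a placeholder): it throws malformed HTTP at a port and watches for gross misbehavior. Everything it cannot see (the architecture, the data flows, and every input channel other than the one port it talks to) is precisely what a real attacker goes after.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.Random;

    public class NaiveHttpFuzzer {
        public static void main(String[] args) throws Exception {
            String host = "target.example.com"; // placeholder target
            Random rng = new Random();

            for (int i = 0; i < 100; i++) {
                // Build an oversized request with a garbage-filled path.
                byte[] junk = new byte[4096];
                rng.nextBytes(junk);
                String request = "GET /" +
                        new String(junk, StandardCharsets.ISO_8859_1) +
                        " HTTP/1.0\r\n\r\n";
                try (Socket s = new Socket(host, 80)) {
                    s.setSoTimeout(2000);
                    OutputStream out = s.getOutputStream();
                    out.write(request.getBytes(StandardCharsets.ISO_8859_1));
                    out.flush();
                    InputStream in = s.getInputStream();
                    // A hang, crash, or 5xx response hints at trouble; a
                    // clean rejection proves nothing about the code behind
                    // the port.
                    System.out.println("probe " + i + ": first byte " + in.read());
                } catch (Exception e) {
                    System.out.println("probe " + i + ": " + e.getMessage());
                }
            }
        }
    }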

Badness-ometers

That said, application security testing tools can tell you something about security—namely, that you’re in very deep trouble. That is, if your software fails any of the canned tests, you have some serious security work to do. The tools can help uncover known issues. But if you pass all the tests with flying colors, you know nothing more than that you passed a handful of tests with flying colors.

Put in more basic terms, application security testing tools are “badness-ometers,” as shown in the figure above. They provide a reading in a range from “deep trouble” to “who knows,” but they do not provide a reading into the “security” range at all. Most vulnerabilities that exist in the architecture and the code are beyond the reach of simple canned tests, so passing all the tests is not that reassuring. (Of course, knowing you’re in deep trouble can be helpful!)

The other major weakness of application security testing tools is that they focus only on input to an application provided over port 80. Understanding and testing a complex program by relying only on the protocol it uses to communicate makes for a shallow analysis. Though many attacks do arrive via HTTP, this is only one category of security problem. First of all, input arrives at modern applications in many forms other than HTTP: consider SSL, environment variables, outside libraries, distributed components that communicate using other protocols, and so on. Beyond program input, software security must consider architectural soundness, data security, access control, software environment, and any number of other aspects, all of which depend on the application itself. There is no set of prefab tests that will probe every possible application in a meaningful way.

The only good use for application security tools is testing commercial off-the-shelf software. Simple dynamic checks set a reasonably low bar to hold vendors to. If software that is delivered to you fails to pass simple tests, you can either reject it out of hand or take steps to monitor its behavior.

In the final analysis, application security testing tools do provide a modicum of value. Organizations that are just beginning to think through software security issues can use them as badness-ometers to help determine how much trouble they are in. Results can alert all the interested parties to the presence of the problem and motivate some mitigation activity. However, you won’t get anything more than a rudimentary analysis with these tools. Fixing the problems they expose requires building better software to begin with—whether you created the software or not.

Software Security: The Trinity of Trouble

[Ed Felten says: Please welcome Gary McGraw as guest blogger for the next week. Gary is CTO at Cigital and co-author of two past books with me. He’s here to post excerpts from his new book, Software Security: Building Security In, which was released this week. The book offers practical advice about how to design and build secure software – a problem that is hugely important and often misunderstood. Now here’s Gary….]

The Trinity of Trouble: Why the Problem Is Growing

Most modern computing systems are susceptible to software security problems, so why is software security a bigger problem now than in the past? Three trends—together making up the trinity of trouble—have a large influence on the growth and evolution of the problem.

Connectivity. The growing connectivity of computers through the Internet has increased both the number of attack vectors and the ease with which an attack can be made. This puts software at greater risk. More and more computers, ranging from home PCs to systems that control critical infrastructure, such as the supervisory control and data acquisition (SCADA) systems that run the power grid, are being connected to enterprise networks and to the Internet. Furthermore, people, businesses, and governments are increasingly dependent on network-enabled communication such as e-mail or Web pages provided by information systems. Things that used to happen offline now happen online. Unfortunately, as these systems are connected to the Internet, they become vulnerable to software-based attacks from distant sources. An attacker no longer needs physical access to a system to exploit vulnerable software, and today software security problems can shut down banking services and airlines (as the SQL Slammer worm of January 2003 showed).

Because access through a network does not require human intervention, launching automated attacks is easy. The ubiquity of networking means that there are more software systems to attack, more attacks, and greater risks from poor software security practices than in the past. We’re really only now beginning to cope with the ten-year-old attack paradigm that results from poor coding and design. Attacks directly related to distributed computation remain rare (though the network itself is the primary vector for getting to and exploiting poor coding and design problems). This will change for the worse over time. Because the Internet is everywhere, the attackers are now at your virtual doorstep.

To make matters worse, large enterprises have caught two bugs: Web Services and its closely aligned Service Oriented Architecture (SOA). Even though SOA is certainly a fad driven by clever marketing, it represents a succinct way to talk about what many security professionals have always known to be true: Legacy applications that were never intended to be internetworked are becoming inter-networked and published as services.

Common platforms being integrated into megasolutions include SAP, PeopleSoft, Oracle, Informatica, Maestro, and so on (not to mention more modern J2EE and .NET apps), as well as COBOL and other ancient mainframe platforms. Many of these applications and legacy systems don’t support common toolkits like SSL, standard plug-ins for authentication/authorization in a connected situation, or even simple cipher use. They don’t have the built-in capability to hook into directory services, which most large shops use for authentication and authorization. Middleware vendors pledge they can completely carve out the complexity of integration and provide seamless connectivity, but even though they provide connectivity (through JCA, WBI, or whatever), the authentication and application-level protocols don’t align.

Thus, middleware integration in reality reduces to something ad hoc like cross-enterprise FTP between applications. What’s worse is that lines of business often fear tight integration with better tools (because they lack skills, project budget, or faith in their infrastructure team), so they end up using middleware to FTP and drop data globs that have to be mopped up and transmogrified into load files or other application input. Because of this issue, legacy product integrations often suffer from two huge security problems:
1. Exclusive reliance on host-to-host authentication with weak passwords
2. Looming data compliance implications having to do with user privacy (because unencrypted transport of data over middleware and the middleware’s implementation for failover and load balancing means that queue cache files get stashed all over the place in plain text)

Current trends in enterprise architecture make connectivity problems more problematic than ever before.

Extensibility. A second trend negatively affecting software security is the degree to which systems have become extensible. An extensible system accepts updates or extensions, sometimes referred to as mobile code, so that the functionality of the system can be evolved in an incremental fashion. For example, the plug-in architecture of Web browsers makes it easy to install viewer extensions for new document types as needed. Today’s operating systems support extensibility through dynamically loadable device drivers and modules. Today’s applications, such as word processors, e-mail clients, spreadsheets, and Web browsers, support extensibility through scripting, controls, components, and applets. The advent of Web Services and SOA, which are built entirely from extensible systems such as J2EE and .NET, brings explicit extensibility to the forefront.
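Here is a minimal sketch of the pattern, with hypothetical class and directory names: a host application loads and runs whatever extension class appears in a drop-in directory. Absent a restrictive security policy, the extension runs with the full privileges of its host, which is the crux of the extensibility problem.

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginHost {

        // The contract the host expects extensions to implement.
        public interface Plugin {
            void run();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical drop-in directory for compiled extension classes.
            File extDir = new File("extensions/");
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { extDir.toURI().toURL() },
                    PluginHost.class.getClassLoader());

            // Any class placed in the directory under the expected name is
            // loaded, instantiated, and executed by the host. Absent a
            // restrictive security policy, it runs with the host's full
            // privileges: this is where unwanted "extensions" slip in.
            Class<?> cls = loader.loadClass("com.example.ViewerExtension");
            Plugin plugin = (Plugin) cls.getDeclaredConstructor().newInstance();
            plugin.run();
            loader.close();
        }
    }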

From an economic standpoint, extensible systems are attractive because they provide flexible interfaces that can be adapted through new components. In today’s marketplace, it is crucial that software be deployed as rapidly as possible in order to gain market share. Yet the marketplace also demands that applications provide new features with each release. An extensible architecture makes it easy to satisfy both demands by allowing the base application code to be shipped early, with later feature extensions shipped as needed.

Unfortunately, the very nature of extensible systems makes it hard to prevent software vulnerabilities from slipping in as unwanted extensions. Advanced languages and platforms including Sun Microsystems’ Java and Microsoft’s .NET Framework are making extensibility commonplace.

Complexity. A third trend impacting software security is the unbridled growth in the size and complexity of modern information systems, especially software systems. A desktop system running Windows XP and associated applications depends on the proper functioning of the kernel as well as the applications to ensure that vulnerabilities cannot compromise the system. However, Windows XP itself consists of at least forty million lines of code, and end-user applications are becoming equally, if not more, complex. When systems become this large, bugs cannot be avoided.

The figure above shows how the complexity of Windows (measured in lines of code) has grown over the years. The point of the graph is not to emphasize the numbers themselves, but rather the growth rate over time. In practice, the defect rate tends to go up as the square of code size, so a code base that doubles in size can be expected to harbor far more than double the defects. Other factors that significantly affect complexity include whether the code is tightly integrated, the overlay of patches and other post-deployment fixes, and critical architectural issues.

The complexity problem is exacerbated by the use of unsafe programming languages (e.g., C and C++) that do not protect against simple kinds of attacks, such as buffer overflows. In theory, we could analyze and prove that a small program was free of problems, but this task is impossible for even the simplest desktop systems today, much less the enterprise-wide systems used by businesses or governments.

Of course, Windows is not alone. Almost all code bases tend to grow over time. During the last three years, I have made an informal survey of thousands of developers. With few exceptions (on the order of 1% of sample size), developers overwhelmingly report that their groups intend to produce more code, not less, as time goes by. Ironically, these same developers also report that they intend to produce fewer bugs even as they produce more code. The unfortunate reality is that “more lines, more bugs” is the rule of thumb that tends to be borne out in practice (and in science, as the next section shows). Developers are an optimistic lot.

The propensity for software systems to grow very large quickly is just as apparent in open source systems as it is in Windows. The problem is, of course, that more code results in more defects and, in turn, more security risk.