December 22, 2024

Software Transparency

Thanks to the recent NSA leaks, people are more worried than ever that their software might have backdoors. If you don’t believe that the software vendor can resist a backdoor request, the onus is on you to look for a backdoor. What you want is software transparency.

Transparency of this type is a much-touted advantage of open source software, so it’s natural to expect that the rise of backdoor fears will boost the popularity of open source code. Many open source projects are fully transparent: not only is the source code public, but the project also makes public the issue tracker that is used to manage known defects and the internal email discussions of the development team. All of these are useful in deterring backdoor attempts.

This kind of transparency often goes together with permissive licenses that allow users to redistribute the code and to modify it and distribute the modified version. That’s the norm in popular open source projects. But it’s possible in principle for a project to be transparent—making code, issue tracking, and design discussions public—while distributing the resulting code under a license that bans modification or redistribution. Such a product would be transparent but would not be free/open source.

Of course, having everything public does not ensure that there are no holes. The Debian project, which is transparent, shipped a badly weakened pseudorandom number generator in its OpenSSL package for about two years before the flaw was noticed. Transparency makes holes detectable, but it doesn’t guarantee that they will be detected.
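To make that concrete, here is a minimal sketch of the kind of change involved. This is hypothetical Python, not the actual OpenSSL code, but it mirrors the structure of the Debian flaw: the call that mixed gathered entropy into the pool was effectively removed, leaving the process ID as the only input that varied.

    # Minimal sketch (hypothetical code, not the real OpenSSL source) of how
    # removing a single mixing step collapses a PRNG's seed space.
    import hashlib
    import os

    def seed_prng(gathered_entropy: bytes) -> bytes:
        pool = hashlib.sha256()
        # In the broken package, the equivalent of this line was effectively
        # commented out, so the gathered entropy never reached the pool:
        # pool.update(gathered_entropy)
        pool.update(os.getpid().to_bytes(4, "little"))  # only ~15 bits vary
        return pool.digest()

    # With the mixing line gone there are only about 32,768 possible seeds,
    # one per process ID, so keys generated this way can be enumerated offline.

That is why the bug was so damaging: every key generated on an affected system fell into a small, precomputable set, even though the code was public the whole time.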

There’s a well-known saying in the open-source world, which Eric Raymond dubbed Linus’s Law: “given enough eyeballs, all bugs are shallow.” The idea is that the key to finding and fixing bugs effectively is to have many people looking at the code—even a bug that is hard for most people to detect will be obvious to a few.

But transparency does not guarantee that holes will be found, because there might not be enough eyeballs on the code. For open source projects, finding backdoors, or security vulnerabilities in general, is a public good in the economists’ sense: effort spent on it benefits everyone, including those who contribute no effort themselves. Public goods tend to be under-supplied, so it’s not obvious in advance that any particular open source project will attract enough scrutiny to rule out backdoors.

Even if there are enough eyes to rule out backdoors in the source code, you’re still not in the clear. Your system doesn’t run source code directly; it must first be translated into machine code. How can you be sure that the machine code running on your machine really corresponds to the source code that was vetted? This is a notoriously difficult problem, and the subject of Ken Thompson’s famous Turing Award lecture, Reflections on Trusting Trust.
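The heart of Thompson’s point can be sketched in a few lines. The following is a schematic illustration, in Python with hypothetical pattern-matching stand-ins rather than a real toolchain, of a compromised compiler binary that miscompiles the login program and also miscompiles any clean compiler source, so the trojan survives a rebuild from fully audited code.

    # Schematic sketch of the "trusting trust" attack. The recognizers below
    # are hypothetical stand-ins; the point is that the trojan lives only in
    # the compiler binary, so reading source code never reveals it.

    TROJAN_SNIPPET = "# (both recognizers below, re-inserted verbatim)"

    def trojaned_compile(source: str) -> str:
        """Stand-in for a compiler binary that has been tampered with."""
        if "def check_password(" in source:
            # Target 1: the login program. Emit code that also accepts a
            # hard-coded password alongside the legitimate one.
            return source.replace(
                "entered == stored",
                "entered == stored or entered == 'open-sesame'",
            )
        if "def compile(" in source:
            # Target 2: the compiler itself. Even if its source is perfectly
            # clean, splice the trojan back into the output, so the next
            # generation of the compiler is compromised too.
            return source + "\n" + TROJAN_SNIPPET
        return source  # everything else is compiled faithfully

Countermeasures exist, notably reproducible builds and diverse double-compilation, but they take real, sustained effort, which is part of why the problem remains hard.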

There is no simple solution to this object code vs. source code problem. Transparency is never easy. But in today’s world it is more important than ever.

Comments

  1. Hello, I am a college student who was given an assignment to find a blog I find interesting and reply to it. I’m still fairly new to the subject, so please correct me if I’m wrong.

    It’s not very easy to make sure that a program is a COMPLETELY correct implementation of a mathematical algorithm or of an open standard. Even the smallest bug in a cryptographic algorithm or protocol can be exploited. Writing a bug is much easier than spotting it. Many applications and OSes get security updates almost daily. Sure, they haven’t found all the bugs yet, but I’d like to believe that if the NSA has created backdoors in our open source software at some point, those vulnerabilities have been patched already or will be soon enough.

  2. BanFrenchRoast says

    First of all, who says the primary threat of backdoors and hooks in open source comes from the NSA? Cybercrime has a bigger budget worldwide than the NSA and far greater motive to penetrate widely. The best in the world at crypto games have always been the Russians, as a matter of national and cultural pride. The best in the world at cyber offense at the hardware, software, and network levels are undoubtedly the Chinese. The NSA is just as mediocre, and likely in relative decline, as the rest of US industry and technology. Get over the NSA hysteria; it makes these so-called “security gurus” look silly.

    Secondly, given the access to source code in open source, anyone who thinks it is “secure” is living in a delusional world. Planting implementation bugs in open source code, or leaving exploits open through flaws in the environment, especially operating systems and large frameworks, is trivial. Strongly secure software does not exist. It is time to think about security in depth at multiple layers, of which the network layer is the weakest.

  3. Interesting idea, and I see your point. With everyone able to view the source code, we have the potential to make it far more secure than if the few programmers who wrote it were the only ones who could view it. I am wondering, though, after reading Ken Thompson’s 1984 lecture “Reflections on Trusting Trust” that you linked to at the end of your blog. He talks about how these “kids” don’t know what they have just done when they hack into a system, indicating that they do not see hacking into a computer system as the same as breaking into someone’s house. My question is: since that was back in 1984, is it still like that today in 2013, and if so, how much of a problem does that pose for software transparency? Or would software transparency create a sense that “everybody knows how to do that, so why should I hack into a computer system” and so deter some of the hacking that occurs?

  4. Though it is a relatively new concern, software transparency is an important one that software developers have to deal with. As society becomes more automated, transparency of public services and processes has acquired fundamental importance. Backdoors in systems built from software whose source code is not publicly available are exposed frequently, even though this is not widely acknowledged. As software permeates our social lives, software transparency has become a quality criterion demanding more attention from software developers.
    However, it is also possible to create a backdoor without any modification to a program’s source code. Achieving software transparency at that level of openness faces several roadblocks.

  5. Another Kevin says

    In the vein of the ‘trusting trust’ problem, I fear that the NSA has recruited outfits such as Intel, American Megatrends, and Phoenix to infiltrate our systems at the hardware and firmware level. I suspect that we all face keyloggers hidden from even the operating system, and in my more paranoid moments I wonder whether the chips might not recognize the stereotyped sequence of operations that carry out a modular exponentiation (Comba and Karatsuba multipliers, perhaps), using it to sniff even long RSA keys.

    The idea sounds farfetched, but so do a great many paranoid notions that, since Snowden’s revelations, have turned out to be true. I suspect that all our electronics are compromised at a level where we will have to go back to hand-wired discrete transistors or MSI blocks in order to build from a base we can trust.
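    For reference, the “stereotyped sequence of operations” in question is roughly the square-and-multiply loop sketched below. This is a minimal Python illustration, not anything tied to real hardware; production RSA code uses optimized multipliers (Karatsuba, Comba, Montgomery form) and constant-time variants, but the rhythm of repeated squarings and conditional multiplies is similar.

        # Minimal sketch of binary (square-and-multiply) modular exponentiation,
        # the operation pattern speculated about above. Illustrative only.
        def mod_exp(base: int, exponent: int, modulus: int) -> int:
            result = 1
            base %= modulus
            while exponent:
                if exponent & 1:                  # multiply step, keyed to an exponent bit
                    result = (result * base) % modulus
                base = (base * base) % modulus    # squaring step, every iteration
                exponent >>= 1
            return result

        # Sanity check: mod_exp(5, 117, 19) == pow(5, 117, 19)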

    • “The idea sounds farfetched, but so do a great many paranoid notions that, since Snowden’s revelations, have turned out to be true.”

      Paranoia is an interesting phenomenon; a study of psychology would detail why. But in short, on paranoia, I just have to quote what I have said time and time again.

      “Just because I am paranoid (clinically speaking) does not mean I am wrong.” And many of the apparently paranoid thoughts I have had over the years have proved to be absolutely true. I agree with you on the likelihood that there are backdoors built into all the hardware. And all it takes is one critical observation to conclude that this is fact.

      Long ago, many, many years ago, I said that the U.S. Government was spying on people’s phone calls and that the phone companies were accomplices to such spying. The very first thing publicized from what Snowden released to the journalists was a court order telling phone companies to give the government information about all phone calls. It was an obvious general warrant, so obviously and blatantly a violation of the U.S. Constitution, yet among the companies it was served on, later revealed to be all the top phone companies in the U.S., not a single person blew the whistle; not a single person or company came forward to fight it in court; not a single one refused to honor such a warrant. They all happily complied and handed over all their phone records to the US Government.

      Why? Well, only they could answer that. I know that if I were in such a position, I would have been faxing that court document to every news agency out there, even if the court had put me under a “gag order” not to reveal anything, and even if my legal beagles told me to comply or be shut down. You can’t compel ME to keep a secret that is so blatantly against the Constitution. But then, I am paranoid, and so when I have the ability to prove that I am not wrong, you cannot silence me without literally killing me.

      Most people are not so paranoid, and so when they see a court order like this they are cowed into complying with it, including the order to keep it completely top secret.

      There is no doubt that the same type of non-paranoid people are working at Intel, AM, Phoenix, Apple, Cisco, Microsoft, Adobe, Google, etc. I can guarantee (though it may sound paranoid) that these companies have been hit with court orders similar in nature to what the phone companies were handed, and those court orders appear to carry the power of the government behind them to the point that those companies are cowed (be it for economic reasons, under threat of imprisonment, or whatever other means they use) into complying with them.

      And I fully suspect that yes, some of those secret court orders would include hardware backdoors. In fact, the phrase “trusting trust”, I am not sure where it came from, raises for me the ugly head of the Trusted Computing Platform. Everything about the TCP has been, in my paranoid view, untrustworthy and untrusted. Am I wrong? Only time will tell; but I am literally betting my life that I am not wrong.

  6. The transparency benefits of open source code are limited to what you run on your own machine, under your own control. Once you are dealing with software as a service, I see little difference between open source and closed source code, for the simple reason that you have no visibility into what the service provider is running and what changes they may have made to the code.

    • Similar principles apply to both “software as a service” and “running on our own machine,” since most of us who do the latter haven’t physically built those machines by soldering together individual transistors from scratch. Since we can no longer trust hardware vendors to act in the interest of their customers, we have to design things much more defensively in either case. While not a panacea, more openness in both hardware and software helps.

  7. The NSA has knowledge and capabilities beyond what outside experts know. So no matter how many eyeballs review the source code, they can compromise code that looks secure to everyone else.

    • They still have the laws of physics and mathematics to contend with; they haven’t broken those. The NSA isn’t God; they’re made up of people, just like the rest of us. And their resources, while astronomically large, aren’t infinite; they’re not omnipotent. Sure, you can’t *guarantee* that a piece of code doesn’t have an unnoticed hole; you never can, no matter what. But you can make that less and less likely, to the point where you have an extremely high probability of being safe, even from the NSA. To say otherwise is spreading FUD.

    • Given the effort they put into lowering security standards, it follows that they cannot break those standards without lowering them first.

  8. There is also the issue that poorly done, difficult-to-read code attracts fewer eyeballs, since nobody wants to be given a headache. Secure code therefore must be well written, with special attention given to readability and ease of maintenance, so that it becomes a joy to work on, and that encourages more people to become involved with it.

    Also, poorly done code that is so bad it has essentially become obfuscated code makes it easier for an adversary to purposefully hide a vulnerability in there, tacked onto another supposedly “helpful patch.” So there is a direct correlation between insecurity and how much of a mess the code is. From a high-level perspective, when everything is orderly and neat, everything becomes much easier to see.