July 27, 2016


Report: Many Apps Misconfigure Security Settings

My fellow Princeton computer scientists Sudhakar Govindavajhala and Andrew Appel released an eye-opening report this week on access control problems in several popular applications.

In the old days, operating systems had simple access control mechanisms. In Unix, each file belonged to an owner and a (single) group of users. The owner had the option to give the other group members read and/or write permission, and the option to give everybody read and/or write permission. That was pretty much it.
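For concreteness, the traditional nine-bit Unix scheme can be decoded in a few lines. This helper is purely illustrative (not part of any tool discussed here); it unpacks a mode word into the owner/group/other read-write-execute triads described above:

```python
import stat  # imported for context; the bit masks below are the classic octal values

# Decode a classic Unix mode word into owner/group/other rwx triads.
def describe_mode(mode: int) -> str:
    triads = []
    for who, shift in (("owner", 6), ("group", 3), ("other", 0)):
        bits = (mode >> shift) & 0o7
        triads.append(who + "=" +
                      ("r" if bits & 0o4 else "-") +
                      ("w" if bits & 0o2 else "-") +
                      ("x" if bits & 0o1 else "-"))
    return " ".join(triads)

print(describe_mode(0o640))  # owner=rw- group=r-- other=---
```

The entire policy vocabulary fits in nine bits, which is exactly why it was easy to understand and hard to misconfigure.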

Over time, things have gotten more complicated. Windows controls access to about fifteen types of objects, with about thirty different flavors of privileges that can each be granted or denied, for any object, to any user or group of users. Privileges can be managed with great precision. In theory, this lets people grant others the absolute minimum privileges they need to do their jobs, which is good security practice.
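A much-simplified sketch of the grant/deny idea (illustrative only; real Windows ACL evaluation is order-dependent and far richer, with inheritance, groups, and canonical ACE ordering) shows how a deny entry can override a grant for the same user and privilege:

```python
# Simplified model of grant/deny access-control entries (ACEs).
# Each ACL is a list of (kind, user, privilege) tuples; an explicit
# deny for a matching user/privilege pair overrides any grant.
def allowed(acl, user, privilege):
    granted = False
    for kind, who, priv in acl:
        if who == user and priv == privilege:
            if kind == "deny":
                return False     # deny wins over any grant
            if kind == "grant":
                granted = True
    return granted

acl = [("grant", "alice", "write"), ("deny", "alice", "write")]
print(allowed(acl, "alice", "write"))  # False: the deny overrides
print(allowed(acl, "alice", "read"))   # False: nothing granted
```

Even in this toy model, predicting the effect of a settings change requires reading every entry; multiply that by thirty privilege flavors and fifteen object types and mistakes become easy.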

The downside of this complexity is that if the system is hard to understand, people will make mistakes. End users will surely make mistakes. But you might think that big software companies can manage this complexity and will get the security settings on their products right.

Which brings us to Sudhakar and Andrew’s research. They built an automated tool to analyze the access control settings on files, registry entries, and other objects on a Windows machine. The tool looks at the settings on the machine and applies a set of inference rules that encode the various ways a user could try to leverage his privileges improperly. For example, one rule says that if Alice has the privilege to modify a program, and Bob runs that program, then Alice can use any of Bob’s privileges. (She can do this by adding code to the program that does what she wants; when Bob runs the program, that code will run with Bob’s privileges.) The tool looks for privilege escalation attacks, or ways for a relatively unprivileged user to gain more privilege.
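The flavor of that inference can be sketched as a tiny fixed-point computation. The rule encoding and names below are illustrative only; the actual tool applies a much larger rule set over real files, registry entries, and privileges:

```python
# Toy version of the kind of inference the tool performs: given facts
# about who can modify which programs and who runs them, compute the
# transitive "can act as" relation to a fixed point.
def escalations(writes, runs):
    # writes: {user: programs that user can modify}
    # runs:   {user: programs that user executes}
    acts_as = {u: {u} for u in set(writes) | set(runs)}
    changed = True
    while changed:
        changed = False
        for alice in writes:
            for bob in runs:
                # Rule: if Alice can modify a program that Bob runs,
                # Alice gains all of Bob's privileges.
                if writes[alice] & runs[bob] and not acts_as[bob] <= acts_as[alice]:
                    acts_as[alice] |= acts_as[bob]
                    changed = True
    return acts_as

priv = escalations(
    writes={"guest": {"updater.exe"}, "admin": set()},
    runs={"admin": {"updater.exe"}, "guest": set()},
)
print("admin" in priv["guest"])  # True: Guest escalates to admin
```

Running rules like this to closure is what turns a pile of individually innocuous-looking settings into an explicit chain of privilege escalation.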

Sudhakar and Andrew ran the tool on professionally-managed Windows systems, and the results were sobering. Several popular applications, from companies like Adobe, AOL, Macromedia, and Microsoft, had misconfigured their access control in ways that allowed relatively unprivileged users – in some cases even the lowliest Guest account – to gain full control of the system.

Sudhakar and Andrew notified the affected vendors well before publishing the paper, and some of the problems they found have been patched. But some problems remain, and testing on new systems tends to find still more problems.

There are two lessons here. First, complicated security mechanisms lead to mistakes, even among relatively sophisticated software developers and companies, so the desire to control privileges precisely must be tempered by the virtue of simplicity. Second, if you’re going to have a complicated system, you probably need tools to help you figure out whether you’re using it safely.

Comments

  1. Edward Kuns says:

    This goes partly to something I said here in another topic, that even to this day security is an afterthought. It is more costly to design software to be secure. You have to have trained developers and testers. Software is secure only if a concerted effort has been made and maintained in its development and maintenance. I believe that most companies producing software don’t see that their software is or can be a security risk, and truly don’t understand risk and how to manage and mitigate risk.

    This problem may have to be solved, somehow, in the operating system, perhaps by a stronger separation of rights between running and installing software.

  2. EK: If I have access to write to the disk, I can install software. Period. There’s no way for the OS to stop me. (I’d be up a creek if I couldn’t set the executable bit except as root, and what security would it buy me really?)

  3. “I believe that most companies producing software don’t see that their software is or can be a security risk, and truly don’t understand risk and how to manage and mitigate risk.”

    But I doubt that shoving the issue onto the OS designers — even if they have access to really good tools — is going to be a useful solution. If an OS makes a paranoid-enough set of tradeoffs between security and convenience to be seriously safe, it’s either going to end up unusable or else spawn backdoors and workarounds that put you back where you started, only in a less well-documented state.

  4. I know this is kind of a strange position, but I think this (as well as many other problems in Windows) has its roots, or at least some of them, in package management (or the lack thereof).

    I have a feeling that a /lot/ of *nix software would be installed similarly if the software developers simply threw together an ELF that installed the software… If you read through enough installation documentation, you’re bound to find quite a few packages that ask for overly-liberal permissions. However, the people who actually package the software tend to know a bit more about permissions (one would hope, as it’s more part of their job description than the developers’), and their decisions are reviewed by the community both before and after acceptance of the packages. Debian, at least, has some rather strict rules indeed.

    Of course, a lot of people use 3rd-party repositories (e.g., PLF), which I doubt are all held to such a high standard, but the installation itself is still fairly transparent: on DPKG systems, dpkg -L packagename will show you everything; on RPM, I think it’s rpm -q --filesbypkg. Try getting a list of everything an *.exe did.

    As far as the complexity of permissions is concerned, I’m sure it does play a part, and probably a large one at that. However, it seems that the added security these mechanisms offer could have a positive overall effect when combined with a transparent package-management solution. I could go on for quite a while about the security implications of package management, so I’ll just shut up now.
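    To make the transparency point concrete, here is a toy check in the spirit of this comment: given a package “manifest” of paths and modes (the sort of listing dpkg -L makes auditable), flag entries that grant world-write permission, which is the class of misconfiguration the report describes. The paths and modes are invented for illustration:

```python
import stat

# Flag manifest entries that are world-writable (stat.S_IWOTH = 0o002).
# manifest: list of (path, mode) pairs, as a package review might see them.
def overly_permissive(manifest):
    return [path for path, mode in manifest
            if mode & stat.S_IWOTH]

manifest = [
    ("/usr/bin/updater", 0o777),   # world-writable executable: bad
    ("/usr/bin/editor", 0o755),    # normal
    ("/etc/app.conf", 0o666),      # world-writable config: bad
]
print(overly_permissive(manifest))  # ['/usr/bin/updater', '/etc/app.conf']
```

    A reviewer with the manifest in hand can run a check like this before a package ships; without a manifest there is nothing to check.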

  5. Edward Kuns says:

    Bill,

    The issue isn’t that one can install software. The issue is that one can install software that has arbitrary privileges. Under UNIX (which is not the epitome of all that is perfect, but I use this as one example) as an unprivileged user, I can install anything I want. But that software cannot run with rights that I do not have myself.

    Windows has a very poor segregation of user vs administrator rights — for the obvious reason that this poor segregation makes it easier to use — which leads to people installing software that does things they do not know it does. The average user probably will never understand the various categories of rights that are important for security. The average developer also does not understand security, but should.

    Bill and Paul,

    Are you both suggesting that we should just throw up our hands and declare security an unsolvable problem? Or if not, then what is an alternate solution?

    Another possible solution I thought of after my first post above is that perhaps Microsoft should provide some automated security scanning tool for developers (not necessarily free, but free would encourage higher security) and set more specific, detailed standards for application security. That is, any application that claims to be compliant with the Microsoft standards should meet basic security requirements. By setting these basic requirements (and assuming there is enough openness that security researchers can report on what they find), average security will improve.

    Of course, Windows is not the only OS out there that has security issues. But it is the one being discussed here.

  6. The logical thing would seem to be to allow users to “install” software that is allowed the same level of access they are (or is further restricted). None of this nonsense of requiring admin-level access to install applications (which in turn means that applications can inherit admin-level access).

  7. Edward: One reason that security is an afterthought is that software has rather weak product liability.

  8. Ned Ulbricht says:

    One reason that security is an afterthought is that software has rather weak product liability.

    cm,

    Bruce Schneier has persistently made this argument. But imposing strict liability on coders or vendors might create an unacceptable risk barrier against participation in the OpenBSD development model.

  9. Bruce Tognazzini has an excellent rant in which he argues that the problem is that we spend too much effort making machines theoretically securable, as opposed to secure in practice. It ought to be required reading for anyone designing security mechanisms.

    Having been in the software industry for more than 30 years, I can attest that the attitude of “let’s make it easy for the customer to do what he wants and simultaneously keep his machine secure” is frequently replaced with “let’s make it possible to secure every attribute of everything”. The result is a system that is often insecure, but in every case the vendor can point at what the customer did wrong.

    The customer, on the other hand, struggles with a situation where he can’t do something appropriate because the system says “access denied”. He changes one partially-understood security setting after another, until he stumbles onto a combination that allows his application to work. Generally, he has no understanding of what risks he’s just created (or if he does, he doesn’t know how to mitigate these risks while still making his application work).

    In short, complexity is part of the problem. History is another part: most of these systems originated in a world where machines were single-user and not networked. In that world, it was sufficient to protect against accidents. In today’s world of multi-user, networked machines, however, it’s necessary to protect against actual malice, and widespread malice at that.

    While Govindavajhala and Appel’s work will help a professional system administrator figure out where his multi-user system has security issues, it probably won’t help the owner of a personal system at all. His biggest problems stem from an increasing confusion of what’s a program and what’s merely data. Generally, he thinks that visiting a web site, inserting a CD, viewing a picture, playing a flash movie and so forth are all benign activities. From his mental model, they’re all just rendering data. From an implementation viewpoint however, many of these potentially execute code that was written by the content author. HTML and Flash intentionally contain programs (scripts) that are automatically run when a page is viewed. CDs contain programs that are automatically run upon insertion. Some content types are not designed to contain programs, but bugs like buffer overruns sometimes cause certain implementations to execute malicious code.

    The situation with respect to the bugs is simultaneously heartening and depressing: there are systems in place such that vendors fix these bugs promptly after discovery, usually before they’ve been publicly reported. On the other hand, there are a lot of evil people who analyze security fixes as soon as they’re released in order to exploit the bug against machines that are slow to be updated.

    With respect to media that are intended to contain programs, the situation is just discouraging — they’re proliferating like mad. Microsoft brought embedded programs to text documents and spreadsheets. Netscape brought scripting to HTML documents. Macromedia brought scripting to movies. Microsoft brought scripting to email messages, by making HTML email ubiquitous.

    This trend shows no evidence of slowing down. For example, HD-DVD and BluRay are competing to see who can put the most non-optional scripting into their systems.

    Unfortunately, I don’t have solutions.

    Disclaimer: I’m employed by Microsoft, but speaking solely for myself.

  10. Anonymous says:

    Why dont you say it openly. You think SunnComm is a company that cant get security right. BS, FTK cant get research right. Why no analysis of Totalplay that the Macromedia people sell? Because one of there directors is a Princeton alumni, no doubt. Our very astute CEO once said academics are entrepreneours that cant earn a buck. If you cant do it, teach it and if you cant teach it, become an academic researcher in it. AH and EF are AHEF and that means dumbass in Swahilli. Thats in Africa if you dont know. LOL LOL LOL

  11. Let’s bite the troll (and hope I don’t get his rabies).
    Prof. Felten has a limited amount of time to spend on research. When cryptographic theory says that DRM is snake oil, and analysis of three products shows that too, there is no scientific urgency to investigate a fourth.

  12. Ned: That’s probably so. Otherwise, my comment was meant to be purely descriptive. I’m not an expert in legal matters, and don’t know whether there is a distinction between commercial products and “hobby” products. Most product liability is in the context of software operating physical equipment, where the liability is really for malfunction of, or damage from, the equipment; or software operating business processes, where malfunction has direct legal or financial implications and where I suspect the liability comes in through the business process.

    Otherwise, whether you pay or not, it’s always “as is”, or any damages are specifically excluded in the license. Which is the status quo in applications that are not directly involved with life and death. In some cases I have read exclusions to the effect that the software is not intended for safety-critical application.

  13. Edward: I’m not throwing up my hands. I think Jim Lyons has a bunch of good points, although perhaps he’s too pessimistic. Tools that can tell developers what security holes they’re opening are good (especially if they pick up stuff that may not have been on mental threat lists). What I’d also like to see emerging from the use of such tools is some kind of “best practices” body of knowledge that would show developers ways to do the kinds of flashy things they want while opening the minimum possible set of holes in the user’s system. That way, users could lock their computers down more effectively without forgoing all the good stuff. (And eventually you’d get a culture going where if a program asked you to do X so that it could run, you’d know it was a lousy or malicious piece of software. Ditto with data that insisted you do something hinky to be able to display it.)

    Yeah, such best practices would slow down so-called innovation, and would put a crimp in developers’ abilities to do things that users wouldn’t, on reflection, want them to do. Hence, unlikely to propagate in the real world.

  14. “In the old days, operating systems had simple access control mechanisms…”

    Well, I think that has less to do with “the old days” and more to do with Unix, which from its beginning was simpler than its contemporaries. Unix has (had) an unusually simple access control mechanism. Compare it to VMS, which has security mechanisms much more like MSWindows’: thirty-odd privilege bits, complex per-file ACLs, and so on. And it has the same kinds of problems as MSWindows, where the complexity of the security settings made it harder to keep track of what was going on.

    (The similarity of Windows and VMS is no coincidence; Dave Cutler was a lead designer of both.)

  15. Unix does have ACLs these days, but the main privilege-granting operation remains the setuid bit, which the system clears as soon as anybody writes to the file. That closes one major road to privilege escalation, permanently.

    And then there’s the installer madness. Why the hell do you need installers? Well-designed Mac programs, just to give one example, don’t need them — drag the thing onto your hard disk and you’re done. There’s no reason at all Windows couldn’t do the same thing. Mac and Unix have package management systems; some (think Debian’s apt+dpkg system as one example) are so far beyond the Windows situation it’s not even remotely funny.

    There’s no reason at all Micro$oft couldn’t do the same thing.
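    The setuid point above suggests a simple audit: flag any file whose mode combines setuid with group- or world-write, the dangerous combination that Unix’s clear-on-write rule exists to prevent. A hypothetical checker (the modes shown are illustrative):

```python
import stat

# A setuid file that non-owners can write is a privilege-escalation
# hole: anyone with write access could replace the program and have
# their code run with the owner's privileges.
def risky_setuid(mode: int) -> bool:
    setuid = bool(mode & stat.S_ISUID)
    others_can_write = bool(mode & (stat.S_IWGRP | stat.S_IWOTH))
    return setuid and others_can_write

print(risky_setuid(0o4777))  # True: setuid and world-writable
print(risky_setuid(0o4755))  # False: setuid, but only owner can write
```

    This is the same shape of analysis as the paper’s tool, just for one rule on one platform.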

  16. It’s good to know about, but I think there would be lots of problems with applicability, as MulVAL needs to be installed on clients’ machines, and I’d be surprised if that were widely accepted.

    None of our customers accept any application being installed on their servers, even as an agent.

    Think again.