July 15, 2024

Want to avoid an Equifax-like breach? Help us understand how system administrators patch machines

The recent Equifax breach, which leaked around 140 million Americans’ personal information, has been traced to a patch that was never applied, even though the company was alerted to the vulnerability in March 2017.

Our work studying how users manage software updates on desktops and mobile devices suggests that keeping machines patched is far from simple. Often, users do not want to apply patches because they do not trust the vendors who create them, because updates are applied in ways that cause too much downtime, or because the user-interface changes that updates introduce disrupt their workflows. However, if we are going to better understand and improve the way patches are applied, so that breaches like Equifax’s are easier to avoid, we also need to study how system administrators patch multiple machines. The end goal of this work is to improve the software-updating experience for everyday users as well as system administrators, and to enhance cybersecurity overall. After all, what is a patch really worth if it’s never installed?

You can help us achieve this goal by forwarding our survey for system administrators who manage software updates to people you know in the United States who are over 18 years of age. If you are a system administrator who manages updates for your organization, we would greatly appreciate you taking 10-15 minutes to complete the survey. System administrators who manage updates can also participate by signing up for an hour-long remote interview. As a token of our appreciation, we are raffling off a Samsung Galaxy S8 to participants who complete the survey, and each interviewee will also receive a $20 Amazon gift card.

To learn more about our work, visit our project page, and please reach out to us at any time if you have any questions.


  1. Andrew McConachie says

    I remember a time when patching wasn’t considered a normal procedure. It wasn’t an expectation that when you bought a computer system it would require constant patching. Perhaps people wrote better software then, or perhaps things were just a lot less complicated.

    I find much of this discourse around patching problematic because it places the burden on the wrong actor. The only reason we have to patch anything at all is because software developers can’t get it right the first time. Instead of focusing on patching we need to be figuring out ways to write bug free software. (I can hear you laughing at this point, but hear me out.)

    Once software has been distributed to its users it becomes inordinately expensive to address bugs. The sooner bugs are found in the development process the cheaper they are to fix. This is well known. But somehow it has become acceptable for many companies to release crap software that isn’t tested properly, and then blame users for not patching. How did we get here?

    Release early and often is fine if you’re a kernel hacker, but terrible if you’re a normal person. Move fast and break things is fine if you’re running a website, but terrible for embedded systems. It can’t always be about time to market. At some point people need to give a damn about code quality, and as long as the discourse stays centered on ‘why can’t users patch?’ we’re never going to get there.

    • Agreed – examining how developers create patches and software in the first place is another important piece of enhancing cybersecurity. (Un)fortunately, software does evolve once it’s been deployed – whether to fix bugs or to improve performance or the interface – and that is hard to change. However, by understanding how to improve the software development process, we can certainly ensure that security does not rely solely on end-users who must adopt patches. Developers can help improve not only the quality of code but also the information that accompanies patches, which helps users apply them in the first place. In our earlier work, for instance, we found that users often do not have sufficient information about what a patch does to make good decisions about whether to apply an update. Developers often do not include detailed or relevant information on how long the installation will take, what changes the patch will make that affect the user, or whether a patch will cause compatibility issues with other software. Some of this information is not known to developers in advance, but creating more informative patches, as well as improving software development processes, could certainly improve the patching ecosystem as a whole. Time for further research studies!

  2. Andre Gironda says

    The vuln that led to the remotely-exploitable condition was due to an appdev patch, not a system patch. It was a JAR (Java ARchive) file that needed to be updated. Sometimes JAR files are embedded in code or code bundles (they are themselves code bundles), typically WAR or EAR files. Other times, code indirection calls or loads these dependencies when it doesn’t appear that it would or should. Thus, a system administrator (unless also an app developer familiar with these amorphisms, which is an extremely rare scenario) would have no idea how to work with this sort of patch.

    There is a tool called OWASP Dependency-Check that is usable by system administrators (it is not complicated to run, though the output can be difficult to parse without a vulnerability-management specialization and a strong information-security background). It can often find out-of-date JAR files that are fingerprinted to contain known vulnerabilities, such as the one in question. However, the amorphic conditions that arise from JAR-in-JAR, JAR-in-WAR/EAR, and WAR/EAR-in-WAR/EAR nesting, especially compounded with code indirection, create a significant number of false positives.

    Recently, a very large bank did an analysis that went several layers deep by unzipping JAR/EAR/WAR files across its entire infrastructure (spanning several dozen operating systems), but still achieved only one-third coverage of the outdated Java Struts code that leads to the remotely-exploitable vulnerability in question. That’s the size of the problem. It is something a highly capable and Java-fluent appdev team may be able to solve, but it is not a task for a sysadmin or infosec professional – especially not a single one – nor even a small team of them.
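
    To illustrate what that unzipping task involves: each JAR/WAR/EAR can contain further archives, so a scanner has to recurse through every layer. A minimal Python sketch of that recursion (my own illustration, with hypothetical file and function names – no substitute for a real tool like OWASP Dependency-Check):

```python
import io
import zipfile

# JAR, WAR, and EAR files all use the ZIP container format.
ARCHIVE_EXTS = (".jar", ".war", ".ear")

def find_nested_archives(data, path, results):
    """Recursively record every archive nested inside a JAR/WAR/EAR blob."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            if name.lower().endswith(ARCHIVE_EXTS):
                nested = path + "!" + name
                results.append(nested)
                # Recurse: a WAR can hold JARs that themselves hold JARs.
                find_nested_archives(zf.read(name), nested, results)

def scan_archive(filename):
    """Return the paths of every archive nested inside `filename`."""
    with open(filename, "rb") as f:
        data = f.read()
    results = []
    find_nested_archives(data, filename, results)
    return results
```

    A real scanner would additionally fingerprint each discovered JAR (e.g., by hash or manifest metadata) against a vulnerability database, which is where the false positives mentioned above creep in.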

    • Thanks for the comment – the problem of keeping machines secure definitely cannot rest solely on system administrators or even end-users; as you point out, sometimes entire teams fail to achieve full coverage. Our work aims to understand the different stakeholders’ points of view so we can inform both design and policy to improve the situation – but as you note, this tackles only a small part of a problem that has no easy fix!