
CD DRM Makes Computers Less Secure

Yesterday, Mark Russinovich of Sysinternals posted an excellent analysis of a CD copy protection system called XCP2. This scheme, created by the British company First4Internet, has been deployed on many Sony/BMG albums released in the last six months. Like the SunnComm MediaMax system that I wrote about in 2003, XCP2 uses an “active,” software-based approach in an attempt to stifle ripping and copying. The first time an XCP2-protected CD is inserted into a Windows system, the Windows Autorun feature launches an installer, which copies a small piece of software onto the computer. From then on, if the user attempts to copy or rip a protected CD, the software replaces the music with static.

This kind of copy protection has several weaknesses. For instance, users can prevent the active protection software from being installed by disabling autorun or by holding the shift key (which temporarily suspends autorun) while inserting protected discs. Or they can remove the software once it’s been installed, as was easily accomplished with the earlier SunnComm technology. Now, it seems, the latest innovations in CD copy protection involve making the protection software harder to uninstall.

What Russinovich discovered is that XCP2 borrows techniques from malicious software to accomplish this. When XCP2 installs its anti-copying program, it also installs a second component which serves to hide the existence of the software. Normally, programs and data aren’t supposed to be invisible, particularly to system administrators; they may be superficially hidden, but administrators need to be able to see what is installed and running in order to keep the computer secure. What kind of software would want to hide from system administrators? Viruses, spyware, and rootkits (malicious programs that surreptitiously hand over control of the computer to a remote intruder). Rootkits in particular are known for their stealthiness, and they sometimes go to great lengths to conceal their presence, as Russinovich explains:

Rootkits that hide files, directories and Registry keys can either execute in user mode by patching Windows APIs in each process that applications use to access those objects, or in kernel mode by intercepting the associated kernel-mode APIs. A common way to intercept kernel-mode application APIs is to patch the kernel’s system service table, a technique that I pioneered with Bryce for Windows back in 1996 when we wrote the first version of Regmon. Every kernel service that’s exported for use by Windows applications has a pointer in a table that’s indexed with the internal service number Windows assigns to the API. If a driver replaces an entry in the table with a pointer to its own function then the kernel invokes the driver function any time an application executes the API and the driver can control the behavior of the API.

Sure enough, XCP2 adopts the latter technique to conceal its presence.
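
To make the table-patching idea concrete, here is a minimal user-mode analogy in C. It is only a sketch, not kernel code and not XCP2’s actual implementation: an array of function pointers stands in for the kernel’s system service table, and “hooking” amounts to overwriting one entry so that calls routed through the table reach the interloper’s filter instead of the real function.

    /*
     * Minimal user-mode analogy of service-table hooking (illustrative only;
     * a real rootkit patches the kernel's table from a driver).
     */
    #include <stdio.h>
    #include <string.h>

    typedef int (*service_fn)(const char *name);

    /* Stand-in for a kernel service, e.g. a directory-listing call. */
    static int real_query_directory(const char *name)
    {
        printf("listing: %s\n", name);
        return 0;
    }

    /* The "service table": one function pointer per service number. */
    static service_fn service_table[] = { real_query_directory };

    /* The hook: filter out cloaked names, then fall through to the original. */
    static service_fn original_entry;
    static int hooked_query_directory(const char *name)
    {
        if (strncmp(name, "$sys$", 5) == 0)
            return 0;                 /* pretend the object doesn't exist */
        return original_entry(name);  /* otherwise behave normally */
    }

    int main(void)
    {
        /* An application call before the "driver" loads hits the real service. */
        service_table[0]("notepad.exe");

        /* The "driver" saves the original pointer and patches the entry. */
        original_entry = service_table[0];
        service_table[0] = hooked_query_directory;

        /* The same calls afterwards are silently filtered. */
        service_table[0]("$sys$notepad.exe");  /* produces no output */
        service_table[0]("notepad.exe");       /* still listed */
        return 0;
    }

A real rootkit does the same thing from a kernel driver against the real service table, which is why anything it chooses to hide simply never reaches the administration tools that rely on those services.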

Russinovich is right to be outraged that XCP2 employs the same techniques against him that a malicious rootkit would. This makes maintaining a secure system more difficult by blurring the line between legitimate and illegitimate software. Some users have described how the software has made their anti-virus programs “go nuts,” caused their system to crash, and cost them hours of aggravation as they puzzled over what appeared to be evidence of a compromised system.

But things are even worse than Russinovich states. According to his writeup, the XCP driver is indiscriminate about what it conceals:

I studied the driver’s initialization function, confirmed that it patches several functions via the system call table and saw that its cloaking code hides any file, directory, Registry key or process whose name begins with “$sys$”. To verify that I made a copy of Notepad.exe named $sys$notepad.exe and it disappeared from view.

Once the driver is installed, there’s no security mechanism in place to ensure that only the XCP2 software can use it. That means any application can make itself virtually invisible to standard Windows administration tools just by renaming its files so that they begin with the string “$sys$”. In some circumstances, real malicious software could leverage this functionality to conceal its own existence.
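
Reproducing Russinovich’s test takes only a few lines. The sketch below is illustrative and assumes a Windows machine; the copy actually disappears only where the XCP2 cloaking driver is loaded. It copies Notepad to a name beginning with “$sys$” and then asks the ordinary Win32 directory-enumeration API whether the copy is still visible.

    /*
     * Sketch of the visibility test described above. On a clean machine the
     * "$sys$" copy shows up; with the XCP2 cloaking driver loaded it does not.
     */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *cloaked = "$sys$notepad.exe";
        char src[MAX_PATH];
        WIN32_FIND_DATAA fd;
        HANDLE h;

        if (GetWindowsDirectoryA(src, MAX_PATH) == 0)
            return 1;
        strcat(src, "\\notepad.exe");

        if (!CopyFileA(src, cloaked, FALSE)) {
            fprintf(stderr, "copy failed: %lu\n", GetLastError());
            return 1;
        }

        h = FindFirstFileA(cloaked, &fd);
        if (h == INVALID_HANDLE_VALUE) {
            printf("%s is invisible to directory enumeration (cloak active)\n",
                   cloaked);
        } else {
            printf("%s is visible (no cloaking driver loaded)\n", cloaked);
            FindClose(h);
        }

        DeleteFileA(cloaked);  /* clean up the test copy */
        return 0;
    }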

To understand how, you need to know that user accounts on Windows can be assigned different levels of control over the operation of the system. For example, some users are granted “administrator” or “root” level access—full control of the system—while others may be given more limited authority that allows them to perform everyday tasks but prevents them from damaging other users’ files or impairing the operation of the computer. One task that administrators can perform but unprivileged users cannot is installing software that uses the cloaking techniques that XCP2 and many rootkits employ. (Indeed, XCP2 is unable to install unless the user running it has administrator privileges.)
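
For the curious, the standard way a Windows program can tell which side of that line it is on is to check whether its token belongs to the built-in Administrators group. The sketch below uses the documented CheckTokenMembership pattern; it merely illustrates the privilege distinction and is not part of XCP2.

    /*
     * Illustrative check: is the current process running as a member of the
     * built-in Administrators group? Installing a kernel driver (the kind of
     * cloaking component XCP2 uses) requires this; a limited account lacks it.
     * Link against advapi32.lib.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SID_IDENTIFIER_AUTHORITY nt_authority = SECURITY_NT_AUTHORITY;
        PSID admin_group = NULL;
        BOOL is_admin = FALSE;

        if (AllocateAndInitializeSid(&nt_authority, 2,
                SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
                0, 0, 0, 0, 0, 0, &admin_group)) {
            if (!CheckTokenMembership(NULL, admin_group, &is_admin))
                is_admin = FALSE;
            FreeSid(admin_group);
        }

        printf(is_admin
            ? "Administrator: this account could install a cloaking driver.\n"
            : "Limited user: this account could not install such a driver.\n");
        return 0;
    }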

It’s a good security practice to give users only the privileges they need to do their jobs—we call this the “Principle of Least Privilege” in the security trade—because, among other reasons, it restricts the activities of malicious software. If every user on a system has administrator access, any malicious program that gets installed can set up its own cloaking mechanisms using the same techniques that XCP2 uses. However, consider what happens when there are multiple accounts on the system, some with administrator access and some with more limited control. Such a setup is fairly common today, even on family computers. If the administrator uses a CD that installs XCP2, the XCP2 cloaking driver will be available to applications installed by any user on the system. Later, if one of the unprivileged users installs some malware, it can use the XCP2 driver to hide itself from the user and the administrator, even though it wouldn’t have permission to perform such cloaking on its own.
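
The asymmetry is easy to illustrate. In the sketch below (the file name “payload.dat” is hypothetical, used only for this example), a limited account is refused when it asks the service control manager for the access it would need to install a driver of its own, yet the plain rename that slips a file under the “$sys$” cloak needs no special privilege at all once the XCP2 driver is already resident.

    /*
     * Illustration of the privilege-escalation concern: a limited account
     * cannot install a kernel driver of its own, but it can trivially rename
     * its files so that an already-installed "$sys$" cloak hides them.
     * "payload.dat" is a hypothetical file name. Link against advapi32.lib.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Step 1: could this account install its own cloaking driver? */
        SC_HANDLE scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
        if (scm != NULL) {
            printf("Administrator: could install drivers directly.\n");
            CloseServiceHandle(scm);
        } else {
            printf("Cannot install a driver/service (error %lu) -- a limited "
                   "account gets ERROR_ACCESS_DENIED here.\n", GetLastError());
        }

        /* Step 2: renaming into the cloak needs no privilege at all. */
        if (MoveFileA("payload.dat", "$sys$payload.dat")) {
            printf("payload.dat renamed to $sys$payload.dat -- hidden wherever "
                   "the XCP2 driver is loaded.\n");
        } else {
            printf("rename failed (%lu); create payload.dat first to try this.\n",
                   GetLastError());
        }
        return 0;
    }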

This kind of security bug is called a “privilege escalation vulnerability.” Whenever such a vulnerability is discovered in Windows, Microsoft quickly rolls out a patch. If Sony and First4Internet have any regard for their customers’ security, they must immediately issue a fix for this serious problem.

Copy protection vendors admit that their software is merely a “speedbump” to copyright infringement, so why do they resort to such dangerous and disreputable means to make their systems only marginally more difficult to bypass? One of the recording industry’s favorite arguments for why users should avoid P2P file sharing is that it can expose them to spyware and viruses. Thanks to First4Internet’s ill-conceived copy protection, the same can now be said of purchasing legitimate CDs.

In case you haven’t already disabled Autorun, now might be a good time.
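
For those who want to make the change permanent rather than remembering the shift key each time, here is a rough sketch in C (an illustration only, not anything shipped by Sony or First4Internet) that sets the well-known NoDriveTypeAutoRun policy value for the current user; the 0xFF mask disables Autorun for every drive type. A registry editor or Group Policy will do the same job.

    /*
     * Illustrative sketch only: disable Autorun for the current user by
     * setting NoDriveTypeAutoRun to 0xFF (all drive types).
     * Build with a Win32 compiler and link against advapi32.lib.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD mask = 0xFF;  /* each bit disables Autorun for one drive type */
        LONG rc = RegCreateKeyExA(HKEY_CURRENT_USER,
            "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer",
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegCreateKeyEx failed: %ld\n", rc);
            return 1;
        }
        rc = RegSetValueExA(key, "NoDriveTypeAutoRun", 0, REG_DWORD,
                            (const BYTE *)&mask, sizeof(mask));
        RegCloseKey(key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegSetValueEx failed: %ld\n", rc);
            return 1;
        }
        printf("Autorun disabled for the current user.\n");
        return 0;
    }

You may need to log off and back on before Explorer picks up the change.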

Comments

  1. […] That means that any hacker who can gain even rudimentary access to a Windows machine infected with the program now has the power to hide anything he wants under the “$sys$” cloak of invisibility. Criticism of Sony has largely focused on this theoretical possibility — that black hats might piggyback on the First 4 Internet software for their own ends. […]

  2. Every day I’m sure that owning and using a Mac is the only intelligent way to be connected to the web.

  3. Joaquin Menchaca says

    How can I respond to Sony? I would like to write a letter to their executive management. I am seriously in a state of shock at this low level of ethics from Sony. I am considering boycotting Sony’s products and encouraging others, especially fellow IT administrators, to do the same. Is that unreasonable?

    I wonder why there aren’t criminal proceedings. Isn’t this like cyber-terrorism?

  4. Just a couple of months back Sony Electronics admitted that forcing people to use their proprietary ATRAC system for music playback cost them dearly, as they missed the whole MP3 revolution (sales-wise) as a result. Of course it does not surprise me that they are now trying to control how you listen to music on your PC as well, through a hidden program that makes changes to your registry without your permission. DRM should not become HRM (human rights management).

  5. Just think how bad this stuff could get if you were running it on a PC with trusted computing enabled. Better to stop this crap now.

  6. @Dan –

    What I was implying, nay stating emphatically, was that stealth technology has been used by “good guys” for a few years now, typically in cases where the enterprise owns the PC (not the user) and doesn’t want the user messing with it. In addition, some security software uses these techniques to protect itself against malware that could terminate the process. The point is that simply hiding things doesn’t automatically mean something is a “rootkit”.

  7. This is a major oops by the companies involved. I am telling everyone I know to turn off autorun on their computers.

  8. @Peter: it is common for enterprise versions of endpoint security software to ensure that it can’t be uninstalled by the user, since the user doesn’t actually own the machine – the enterprise does. The user is generally not installing it either, and doesn’t have local admin privileges.

  9. ‘Pete’: I’ve never seen a client security program that attempts to hide itself from the user. If the user has installed it, why should it be hidden?

    This malicious software (hiding software from the user is malicious, although not very harmful) should be detected on installation by antivirus software. It’s just a shame that there’s no behaviour-blocking antivirus that would detect this kind of thing.

  10. “Russinovich is right to be outraged that XCP2 employs the same techniques against him that a malicious rootkit would.”

    Just about every client security program – antivirus, firewalls, host intrusion prevention, etc. – employs the same techniques as well. Functional purpose should drive the outrage, not surprise over techniques that have been used for years, even by the “good guys”.