December 14, 2024

NYU/Princeton Spyware Workshop Liveblog

Today I’m at the NYU/Princeton spyware workshop. I’ll be liveblogging the workshop here. I won’t give you copious notes on what each speaker says, just a list of things that strike me as interesting. Videos of the presentations will be available on the net eventually.

I gave a basic tutorial on spyware last night, to kick off the workshop.

The first panel today is officially about the nature of the spyware problem, but it’s shaping up as the law enforcement panel. The first speaker is Mark Eckenwiler from the U.S. Department of Justice. He is summarizing the various Federal statutes that can be used against spyware purveyors, including statutes against wiretapping and computer intrusions. One issue I hadn’t heard before involves how to prove that a particular spyware purveyor caused harm, if the victim’s computer was also infected with lots of other spyware from other sources.

Second speaker is Eileen Harrington of the Federal Trade Commission. The FTC has two main roles here: to enforce laws, especially relating to unfair and deceptive business practices, and to run hearings and study issues. In 1995 the FTC ran a series of hearings on online consumer protection, which identified privacy as important but didn’t identify spam or spyware. In recent years their focus has shifted more toward spyware. FTC enforcement is based on three principles: the computer belongs to the consumer; disclosure can’t be buried in a EULA; and software must be reasonably removable. These seem sensible to me. She recommends a consumer education website created by the FTC and other government agencies.

Third speaker is Justin Brookman of the New York Attorney General’s office. To them, consent is the biggest issue. He is skeptical of state spyware laws, saying they are often too narrow and require a high level of intent to be proven for civil liability. Instead, they enforce based on laws against deceptive business practices and false advertising, and on trespass to chattels. They focus on the consumer experience, and don’t always need to dig very deeply into all of the technical details. He says music lyric sites are often spyware-laden. In one case, a screen saver came with a 188-page EULA, which mentioned the included adware on page 131. He raises the issue of when companies are responsible for what their “affiliates” do.

Final speaker of the first panel is Ari Schwartz of CDT, who runs the Anti-Spyware Coalition. ASC is a big coalition of public-interest groups, companies, and others to build consensus around a definition of spyware and principles for dealing with it. The definition problem is both harder and more important than you might think. The goal was to create a broadly accepted definition, to short-circuit debates about whether particular pieces of unpleasant software are or are not spyware. He says that many of the harms caused by software are well addressed by existing law (identity theft, extortion, corporate espionage, etc.), but general privacy invasions are not. In what looks like a recurring theme for the workshop, he talks about how spyware purveyors use intermediaries (“affiliates”) to create plausible deniability. He shows a hair-raising chain of emails obtained in discovery in an FTC case against Sanford Wallace and associates. This was apparently an extortion-type scheme, where extreme spyware was locked on to a user’s computer, and the antidote was sold to users for $30.

Question to the panel about what happens if the perpetrator is overseas. Eileen Harrington says that if there are money flows, they can freeze assets or sometimes get money repatriated from overseas. The FTC wants statutory changes to foster information exchange with other governments. Ari Schwartz says advertisers, ad agencies, and adware makers are mostly in the U.S. Distribution of software is sometimes from the U.S., sometimes from Eastern Europe, the former Soviet Union, or Asia.

Q&A discussion of how spyware programs attack each other. Justin Brookman talks about a case where one spyware company sued another spyware company over this.

The second panel is on “motives, incentives, and causes”. It’s two engineers and two lawyers. First is Eric Allred, an engineer from Microsoft’s antispyware group. “Why is this going on? For the money.”

Eric talks about game programs that use spyware tactics to fight cheating code, e.g. the “warden” in World of Warcraft. He talks about products that check quality of service or performance provided by, e.g., network software, by tracking some behaviors. He thinks this is okay with adequate notice and consent.

He takes a poll of the room. Only a few people admit to having their machines infected by spyware – I’ll bet people are underreporting. Most people say that friends have caught spyware.

Second speaker is Markus Jakobsson, an engineer from Indiana University and RavenWhite. He is interested in phishing and pharming, and the means by which sites can gather information about you. As a demonstration, he says his home page tells you where you do your online banking.

He describes an experiment they did that simulated phishing against IU students. Lots of people fell for it. Interestingly, people with political views on the far left or far right were more likely to fall for it than people with more moderate views. The experimental subjects were really mad (but the experiment had proper institutional review board approval).

“My conclusion is that user education does not work.”

Third is Paul Ohm, a law professor at Colorado. He was previously a prosecutor at the DOJ. He talks about the “myth of the superuser”. (I would have said “superattacker”.) He argues that Internet crime policy is wrongly aimed to stop the superuser.

What happens? Congress writes prohibitions that are broad and vague. Prosecutors and civil litigants use the broad language to pursue novel theories. Innocent people get swept in.

He conjectures that most spyware purveyors aren’t technological superusers. In general, he argues that legislation should focus on non-superuser methods and harms.

He talks about the SPYBLOCK Act language, which bans certain actions, if done with certain bad intent. “The FBI agent stops reading after the list of actions.”

Fourth is Marc Rotenberg from EPIC. His talk is structured as a list of observations, presented in random order. I’ll repeat some of them here. (1) People tend to behave opportunistically online – extract information if you can. (2) “Spyware is a crime of architectural opportunity.” (3) Motivations for spyware: money, control, exploitation, investigation.

He argues that cookies are spyware. This is a controversial view. He argues for reimagining cookies or how users can control them.

Q&A session begins. Alex asks Paul Ohm whether it makes sense in the long run to focus on attackers who aren’t super, given that attackers can adapt. Paul says, first, that he hopes technologists will help stop the superattackers. (The myth of the super-defender?) He advocates a more incremental and adaptive approach to drafting the statutes; aim at the 80% case, then adjust every few years.

Question to Marc Rotenberg about what can be done about cookies. Marc says that originally cookies contained, legibly, the information they represented, such as your zip code. But before long cookies morphed into unique identifiers, opaque to the user. Eric Allred points out that the cookies can be strongly, cryptographically opaque to users.
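The shift Marc describes — from legible cookies to opaque identifiers — can be illustrated with a short sketch. The values and the HMAC construction here are my own illustration (not from any talk); an HMAC over a server-side identifier is one common way a cookie ends up unreadable and unforgeable from the user’s point of view:

```python
import hmac, hashlib

# An old-style "legible" cookie: the user can read what it stores.
legible_cookie = "zipcode=08540; lang=en"

# A modern opaque cookie: a server-side identifier, bound with an HMAC
# so users can neither read its meaning nor forge a different one.
# (The key and user id below are invented for illustration.)
server_key = b"hypothetical-server-secret"
user_id = "user-123456"
tag = hmac.new(server_key, user_id.encode(), hashlib.sha256).hexdigest()
opaque_cookie = f"uid={user_id}.{tag}"

def verify(cookie: str) -> bool:
    """Server-side check that the identifier was not tampered with."""
    uid, _, presented_tag = cookie.removeprefix("uid=").partition(".")
    expected = hmac.new(server_key, uid.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(presented_tag, expected)
```

Only the server holds `server_key`, so only the server can interpret or mint valid cookies — which is exactly the opacity Eric Allred points out.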

The final session is on solutions. Ben Edelman speaks first. He shows a series of examples of unsavory practices, relating to installation without full consent and to revenue sources for adware.

He shows a scenario where a NetFlix popup ad appears when a user visits blockbuster.com. This happened through a series of intermediaries – seven HTTP redirects – to pop up the ad. Netflix paid LinkShare, LinkShare paid Azoogle, Azoogle paid MyGeek, and MyGeek paid DirectRevenue. He’s got lots of examples like this, from different mainstream ad services.
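The kind of hand-off Edelman traces can be modeled as a chain of HTTP redirects, each hop passing the request (and the payment obligation) to the next intermediary. A toy simulation, with invented domain names standing in for the real parties:

```python
# Each entry maps a URL to the Location it would redirect to;
# None marks the final landing page. All domains are invented examples.
REDIRECTS = {
    "http://tracker-a.example/click": "http://broker-b.example/route",
    "http://broker-b.example/route": "http://network-c.example/serve",
    "http://network-c.example/serve": "http://advertiser.example/ad",
    "http://advertiser.example/ad": None,
}

def follow_chain(url, table, max_hops=10):
    """Return the list of URLs visited, following redirects until a
    final page is reached or the hop limit trips."""
    hops = [url]
    while table.get(url) is not None:
        if len(hops) > max_hops:
            raise RuntimeError("redirect loop or chain too long")
        url = table[url]
        hops.append(url)
    return hops
```

Each intermediary in the chain sees the request go by, which is what makes it so hard to pin responsibility on any single party.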

He shows an example of Google AdSense ads popping up in 180solutions adware popup windows. He says he found 4600+ URLs where this happened (as of last June).

Orin Kerr speaks next. “The purpose of my talk is to suggest that there are no good ways for the law to handle the spyware problem.” He suggests that technical solutions are a better idea. A pattern today: lawyers want to rely more on technical solutions, technologists want to rely more on law.

He says criminal law works best when the person being prosecuted is clearly evil, even to a juror who doesn’t understand much about what happened. He says that spyware purveyors more often operate in a hazy gray area – so criminal prosecution doesn’t look like the right tool.

He says civil suits by private parties may not work, because defendants don’t have deep enough pockets to make serious suits worthwhile.

He says civil suits by government (e.g., the FTC) may not work, because they have weaker investigative powers than criminal investigators, especially against fly-by-night companies.

It seems to me that his arguments mostly rely on the shady, elusive nature of spyware companies. Civil actions may well work against large companies that portray themselves as legitimate; if so, such actions would at least drive spyware vendors underground, which could make it harder for them to sell to some advertisers.

Ira Rubinstein of Microsoft is next. His title is “Code Signing As a Spyware Solution”. He describes (the 64-bit version of) Windows Vista, which will require any kernel-mode software to be digitally signed. This is aimed at stopping rootkits and other kernel-mode exploits. It sounds quite similar to Authenticode, Microsoft’s longstanding signing infrastructure for ActiveX controls.

Mark Miller of HP is the last speaker. His talk starts with an End-User Listening Agreement, in which everyone in the audience must agree that he can read our minds and redistribute what he learns. He says that we’re not concerned about this because it’s infeasible for him to install hostile code into our brains.

He points out that the Solitaire program has the power to read, analyze or transmit any data on the computer. Any other program can do the same. He argues that we need to obey the principle of least privilege. It seems to me that we already have all the tools to do this, but people don’t do it.
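Miller’s point about Solitaire is that ordinary programs run with ambient authority: they can touch anything the user can. The least-privilege alternative is to hand a program only the specific resources it needs. A minimal sketch of the contrast, in capability style (the function names and task are my own illustration):

```python
import io

# Ambient authority: this function can open ANY file on the machine --
# nothing limits it to the one file it actually needs, just as
# Solitaire is not limited to the data a card game needs.
def word_count_ambient(path):
    with open(path) as f:
        return len(f.read().split())

# Least privilege: the caller hands over only a readable stream.
# The function cannot name, open, or transmit anything else.
def word_count_pola(readable) -> int:
    return len(readable.read().split())
```

The caller now decides exactly what the code may touch, e.g. `word_count_pola(io.StringIO("two words"))` — which is the discipline Miller argues we have the tools for but don’t practice.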

He shows an example of how to stop a browser from leaking your secrets, by either not letting it connect to the Net, or not letting it read any local files. But even a simple browser needs to do both. This is not a convincing demo.

In the Q&A, Ben Edelman recommends Eric Howes’s web site as a list of which antispyware tools are legit and which are bogus or dangerous.

Orin Kerr is asked whether we should just give up on using the law. He says no, we should use the law to chip away at the problem, but we shouldn’t expect it to solve the problem entirely. Justin Brookman challenges Orin, saying that civil subpoena power seems to work for Justin’s group at the NY AG office. Orin backtracks slightly but sticks to his basic point that spyware vendors will adapt or evolve into forms more resistant to enforcement.

Alex asks Orin how law and technology might work together to attack the problem. Orin says he doesn’t see a grand solution, just incremental chipping away at the problem. Ira Rubinstein says that law can adjust incentives, to foster adoption of better technology approaches.

And our day draws to a close. All in all, it was a very interesting and thought-provoking discussion. I wish it had been longer – which I rarely say at the end of this kind of event.

Comments

  1. It seems to me that if we were to reinvent the browser to a different type of application that has no power to indiscriminately load and run any software at the whim of a webmaster or hacker, and is capable of doing searches and reading useful content of web sites, we could all have an easier time of it.

    What I mean is that if everyone was not so concerned about seeing all the whiz-bang-flash of the web sites we currently see, and we were just able to read the content we were looking for (along with, of course, the supporting advertising alongside that isn’t annoyingly blinking, flashing, scrolling, and running videos to distract us from why we are there), things would be a lot more pleasant. If we wanted to access special features of web sites we could then invoke preloaded safe applications we consciously and purposely loaded to execute that feature.

    Just imagine, we could get real work done on the web with a 56K modem in that kind of world. The web can be a great place for finding information, I use it for that every day. Too much of my time on the web is spent researching security and spyware issues to help clients and keep myself safe. I also use it for entertainment occasionally. Maybe someone will create a secure browser that ignores all the junk and flashy do-dads and just lets us get our work done on the web.

    Wow what a waste of bandwidth we experience today with all the multi-megabyte advertising we are forced to see. Think how fast our browsing experience would be if all the unwanted garbage was gone. If the browser was incapable of showing all the different types of content and environments we currently see (which compromise the security of our computers and data in the process), and instead just gave us what we searched for, it would be a safer world. Think of all the hard drive space we would save without all the unseen garbage being downloaded to our hard drives that we never even see, want or need. Then it just stays there until we forcibly remove it. I cringe every time I look at my registry and see all the garbage tweaks placed in there by the crap software forced on my machine by websites I have visited.

    Fortunately, as a retired computer professional, I have enough knowledge to find and remove most of the crap that worms its way past my firewall and anti-virus software. I have a long list of ad-monger, info-gathering web sites that I block completely at the firewall. Occasionally something I want to see is unreachable as it relays through one of those unsavory sites to get where I want to go, so that information is just looked up elsewhere.

    OH – Wow, would implementing what I suggest above put a bunch of incompetent and unscrupulous “programmers” out of work? Would hackers then have less of a playground to access? Would hackers then have to get a real job? Well maybe they could start new businesses creating some useful products or durable goods so we don’t have to buy from China. Then maybe the USA could regain a manufacturing base instead of mouldering away as a service based economy fading away to a third-rate world power.

    SW

  2. This kernel mode code signing sounds bad to me. I foresee two possibilities:
    1. Anyone can PGP sign some code and run it in kernel mode in Vista;
    2. You need to buy an expensive certificate from some kind of (central?) authority to do it.

    In case 1, it won’t stop another XCP rootkit fiasco — First4Internet will simply sign their code.
    In case 2, not only won’t it stop another XCP (First4Internet will pony up, then sign their code) but it will raise the barrier to entry for open source. You won’t be able to make a functioning kernel mode code module without having a certain amount of disposable income.

    This gives us a good idea of what Microsoft’s real aim is: make programming inherently expensive, and thereby kill open source. (They’ve as much as admitted such aims in e.g. the Halloween documents.) Code signing requirements will do so, but won’t do a thing about spyware and other malware (since all code signing does is prove the code was written or endorsed by a certain entity, and not modified since; that entity is not by any stretch guaranteed to be trustworthy).

    Of course, relatively little needs to run in kernel mode, so we wouldn’t feel huge effects (sysinternals points out you can make user-mode rootkits for Windows machines, for example); this would enable Microsoft to experience less opposition or resistance when they eventually propose requiring user-mode apps to be signed (or make unsigned ones run with reduced privileges). There goes all Windows open source, from mingw to the Eclipse IDE to … (Sure, relatively wealthy developers may still ante up to gain their software entry into the Windows world; but it won’t truly be “open source” anymore, since a modified version won’t run on Windows unless the modifier also antes up. Similar to the way the TCPA could kill open source.)

    The effect on malware would be either insignificant, or to make malware the exclusive domain of the wealthy. To see this, consider the possibilities:
    * No certification authority — this won’t have any effect, even on open source.
    * Many certification authorities — this will harm open source and stop viruses and backdoor trojans written by lone antisocial nerds, but not malware with some funding behind it, including all spyware, anything like XCP, and things like Back Orifice with a political agenda behind it and therefore probably some amount of money willing to be spent once to get it loosed upon the world.
    * Certification basically requires the code be okayed by Microsoft — this will kill open source and stop quite a bit of malware, but not all of it; the fact that Windows AntiSpyware won’t recommend removal of certain spyware apps is proof that MS can’t be trusted to make the best choice for its users in this regard. It just means some of the spyware revenues will go to Microsoft! Likewise, Microsoft is feeling awfully DRM-friendly these days. How likely is it that Vista kernel-mode code signing won’t do a thing to stop the next XCP, but will stop the next sysinternals-like blog exposé? It could make it impossible to build software like RootkitRevealer without being Microsoft, or at least paying through the nose — and thus impossible for home computer sysadmins and shallow-pocketed independent researchers to use such tools without buying an expensive, shrinkwrapped, MS-approved product that may be designed to turn a blind eye to a rootkit if it comes from someone who has paid MS enough for the privilege.