April 26, 2008

Counterfeits, Trojan Horses, and shady distributors

Last Friday, the New York Times published an article about counterfeit Cisco products that have been sold as if they were genuine and are widely used throughout the U.S. government.  The article also raised the concern that these counterfeits could well have been engineered with malicious intent, though this appears not to have been the case.  There was an immediate Slashdot thread as well, but a number of issues are still worth commenting on.

First things first: the facts, as best we understand them.  The New York Times reports that approximately 3,500 counterfeit Cisco components (worth $3.5M) have been discovered as a result of a two-year FBI investigation.  A Cisco spokesman is quoted as saying that the company found “no evidence of re-engineering.”  In other words, we’re talking about faithful knock-offs of legitimate products.

If you go to the FBI’s unclassified PowerPoint presentation (dated January 11, 2008), you’ll see all the actual information.  This is a fascinating read.  For starters, let’s talk about the cost.  The slides claim you can get a counterfeit router for approximately 1/6 the cost of a genuine router.  (You can do similarly well buying used gear on eBay.)  The counterfeit gear looks an awful lot like the genuine article.  Detecting differences here is as difficult as detecting counterfeit money, counterfeit Rolex watches, or counterfeit signatures from sports stars.  Given the apparent discrepancy between component cost and street value, we should be no more surprised to find knock-off Cisco gear than we are to find knock-off everything else.

[Image: Counterfeit vs. original Cisco line card]

It’s claimed that these counterfeits are built to lower manufacturing standards than the original equipment, causing higher failure rates; one even caught fire due to a faulty power supply.  The fakers are also making careless errors, like building multiple components with the same MAC address.  (MAC addresses, by design, are meant to be unique – no two ever the same.)
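
Duplicate identifiers are also trivially easy to catch, if anybody bothers to look. Here’s a minimal sketch of the kind of inventory check that would flag this blunder; the data and field names are invented for illustration.

```python
from collections import defaultdict

def find_duplicate_macs(inventory):
    """Group inventory records by MAC address (which should be globally
    unique) and return any address that appears on more than one unit."""
    seen = defaultdict(list)
    for unit in inventory:
        seen[unit["mac"]].append(unit["serial"])
    return {mac: serials for mac, serials in seen.items() if len(serials) > 1}

# Toy inventory: two line cards shipped with the same MAC address,
# the kind of blunder described in the FBI slides.
inventory = [
    {"mac": "00:1A:2B:3C:4D:5E", "serial": "FTX1111A"},
    {"mac": "00:1A:2B:3C:4D:5E", "serial": "FTX2222B"},
    {"mac": "00:1A:2B:99:88:77", "serial": "FTX3333C"},
]
print(find_duplicate_macs(inventory))
# {'00:1A:2B:3C:4D:5E': ['FTX1111A', 'FTX2222B']}
```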

The really interesting story is all about the supply chain. Consider how you might buy yourself a new Mac.  You could go to your local Apple store.  Or you could get it from any of a variety of other stores, which in turn may have gotten it from Apple directly or may have gone through a distributor.  Apparently, for Cisco gear, it’s much more complicated than that.  The U.S. government buys from “approved” vendors, who might then buy from multiple tiers of sub-contractors.  In one case, a single individual bought shady gear on eBay and resold it to the government, moving a total of $1M in gear before he was caught.  In a more complicated case, Lockheed Martin won a bid for a U.S. Navy project.  It contracted with an unauthorized Cisco reseller, which in turn contracted with somebody else, who used a sub-contractor, who then shipped the counterfeit gear directly to the Navy. (The slides say that $250K worth of counterfeit gear was sold; duplicate serial numbers were discovered.)

Why is this happening?  The government wants to save money, so it looks for contractors who can give it the best price, and its contracts allow for subcontracts, direct third-party shipping, and so forth.  There is no serious vetting of this supply chain by either Cisco or the government. Apparently, Cisco doesn’t do direct sales except for high-end, specialized gear.  You’d think Cisco would follow the lead of the airline industry, among others, and cut out the distributors to keep the profit for itself.

Okay, on to the speculation.  Both the New York Times and the FBI presentation concern themselves with Trojan Horses.  Even though there’s no evidence that any of this counterfeit gear was actually malicious, the weak controls in the supply chain make it awfully easy for such compromised gear to be sold into sensitive parts of the government, raising all the obvious concerns.

Consider a recent paper by the University of Illinois’s Sam King et al., in which they built a “malicious processor”.  The idea is pretty clever.  You send along a “secret knock” (e.g., a network packet with a particular header) which trips a sensor that enables “shadow code” to start running alongside the real operating system.  The Illinois team built shadow code that compromised the Linux login program, adding a backdoor password.  Once the backdoor had been used, the shadow code disabled itself, returning the system to “normal” operation.
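
To make the trigger mechanism concrete, here’s a toy software model of the idea. (The real King et al. work modified the processor hardware itself; the trigger pattern and passwords below are invented.)

```python
SECRET_KNOCK = b"\xde\xad\xbe\xef"  # invented trigger pattern

class ToyMaliciousDevice:
    """Toy model of the 'secret knock' idea: a sensor watches traffic for a
    magic header; once tripped, shadow logic accepts a backdoor password,
    then disarms itself so the system looks normal again."""

    def __init__(self):
        self.shadow_enabled = False

    def inspect_packet(self, packet: bytes) -> None:
        if packet.startswith(SECRET_KNOCK):  # the sensor
            self.shadow_enabled = True       # enable the shadow code

    def check_login(self, password: str) -> bool:
        if self.shadow_enabled and password == "backdoor123":
            self.shadow_enabled = False      # disarm: back to "normal"
            return True
        return password == "correct-horse"   # stand-in for real authentication

device = ToyMaliciousDevice()
assert not device.check_login("backdoor123")        # inert until triggered
device.inspect_packet(SECRET_KNOCK + b"payload...")
assert device.check_login("backdoor123")            # one-shot backdoor
assert not device.check_login("backdoor123")        # already disarmed
```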

The military is awfully worried about this sort of threat, as well it should be.  For that matter, so are voting machine critics. It’s awfully easy for “stealth” malicious behavior to hide in legitimate systems, no matter how carefully you analyze or test them. Ken Thompson’s classic paper, Reflections on Trusting Trust, shows how he designed a clever Trojan Horse for Unix.  [Edit: it’s unclear that it ever got released into the wild.]

Okay everybody, let’s put on our evil hats.  If your goal was to get a Trojan Horse router into a sensitive military environment, how would you do it and how would it behave?  Clearly, the weak supply chain is an excellent vector for getting the gear into place.  Given the resources of a nation-state intelligence agency, you could afford to buy genuine Cisco parts and modify them, rather than using low-cost, counterfeit gear.  Nobody would detect you; you wouldn’t screw up and ship multiple boxes with the same serial number.

How would you implement your Trojan Horse logic?  Pretty much any gear of even modest complexity has software running inside it; even line cards have embedded processors of some sort.  Wherever there’s hardware, there’s software, and that’s where you’d install your logic bomb.  The increasing use of FPGAs in industrial designs means you could also “rewire” those parts to behave arbitrarily, much like the Illinois hack.  You’d really want to get hold of the original VHDL “source code”, leveraging your aforementioned spying prowess, to simplify the design and implementation of your malicious behavior.  Hacking the raw netlists (the FPGA equivalent of machine code) would be possible, but far more painful. [See Sidebar.]

What sort of behavior would you build in?  The New York Times raises the idea of a kill switch: I send your router a magic packet and it dies.  That’s too easy.  How about this: I send your router a magic packet; it forwards the packet on to all of its peers, repeatedly; and then they all die a few seconds later.  That’s a pretty good denial-of-service attack (never mind a plot device that was the basis of a popular science fiction television series). Alternatively, following the Illinois idea, we could imagine that the magic packet turns on a monitoring feature, allowing our intelligence agency to gather all kinds of information, reconfigure the router, and so forth.  If they don’t want to generate extra traffic, which might be detected, they could instead weaken the encryption of a VPN tunnel, perhaps publishing the session key through a subliminal channel of some sort and acquiring the ciphertext through “other” means.
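
As a back-of-the-envelope illustration of why the cascading variant is so much nastier than a single kill switch, here’s a toy simulation; the topology is invented, and every “router” obligingly forwards the magic packet before dying.

```python
# Invented four-router topology; each router lists its directly connected peers.
topology = {
    "r1": ["r2", "r3"],
    "r2": ["r1", "r4"],
    "r3": ["r1", "r4"],
    "r4": ["r2", "r3"],
}

def cascade(first_target: str) -> set:
    """Simulate the magic packet: each router forwards it to all of its
    peers, then dies. One packet takes down everything reachable."""
    dead, frontier = set(), [first_target]
    while frontier:
        router = frontier.pop()
        if router in dead:
            continue
        dead.add(router)                   # this router dies...
        frontier.extend(topology[router])  # ...after forwarding the packet
    return dead

print(cascade("r1"))  # {'r1', 'r2', 'r3', 'r4'} -- the whole network
```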

In summary, it’s probably a good thing, from the perspective of the U.S. military, to discover that their supply chain is allowing counterfeit gear into production.  This will help them clean up the supply chain, and will also provide an extra push to consider just how much they trust the sources of their equipment to ship clean software and hardware.

[Sidebar: Xilinx supports a notion of “encrypting” a netlist.  Broadly speaking, the idea behind the technology is to encrypt the description of your FPGA configuration with a crypto key, such that anybody who reads the file out of your board gets encrypted garbage.  However, the FPGA has the key material to decrypt the configuration and then initialize itself normally.  This sort of technology is meant to serve an anti-piracy / anti-reverse-engineering purpose.  It could ostensibly also serve an anti-Trojan Horse purpose, although at that point it’s really no more or less secure, semantically, than Microsoft’s Authenticode.  This technology, more broadly, is also an active research area (see, for example, Roy et al.’s EPIC: Ending Piracy of Integrated Circuits).  Again, if we’ve got a nation-state intelligence service tampering with the system, none of this is going to provide meaningful protection for the end-user against Trojan Horses.]
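
To illustrate the trust model behind encrypted configurations (and emphatically not Xilinx’s actual mechanism), here’s a minimal sketch, with the Python cryptography package’s Fernet construction standing in for the device’s built-in cipher:

```python
# A minimal sketch of the encrypted-bitstream idea, with Fernet standing in
# for the FPGA's built-in decryption hardware. Anyone who dumps the board's
# flash sees only ciphertext; the chip, which holds the key, can decrypt
# and configure itself. Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()      # imagine this burned into the chip
device = Fernet(device_key)

netlist = b"...proprietary FPGA configuration..."
stored_bitstream = device.encrypt(netlist)   # what actually sits in flash

assert device.decrypt(stored_bitstream) == netlist  # chip boots normally
# Note the limits: an adversary who controls manufacturing (our hypothetical
# nation-state) also controls device_key, so this protects against piracy
# and casual reverse engineering, not against a malicious supply chain.
```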

Economics of Eavesdropping For Pay

Following up on Andrew’s post about eavesdropping as a profit center for telecom companies, let’s take a quick look at the economics of eavesdropping for money. We’ll assume for the sake of argument that (1) telecom (i.e. transporting bits) is a commodity so competition forces providers to sell it essentially at cost, (2) the government wants to engage in certain eavesdropping and/or data mining that requires cooperation from telecom providers, (3) cooperation is optional for each provider, and (4) the government is willing to pay providers to cooperate.

A few caveats are in order. First, we’re not talking about situations, such as traditional law enforcement eavesdropping pursuant to a warrant, where the provider is compelled to cooperate. Providers will cooperate in those situations, as they should. We’re only talking about additional eavesdropping where the providers can choose whether to cooperate. Second, we don’t care whether the government pays for cooperation or threatens retaliation for non-cooperation – either way the provider ends up with more money if it cooperates. Finally, we’re assuming that the hypothetical surveillance or data mining program, and the providers’ participation in it, is lawful; otherwise the law will (eventually) stop it. With those caveats out of the way, let the analysis begin.

Suppose a provider charges each customer an amount P for telecom service. The provider makes minimal profit at price P, because by assumption telecom is a commodity. The government offers to pay the provider an amount E per customer if the provider allows surveillance. The provider has two choices: accept the payment and offer service with surveillance at a price of P-E, or refuse the payment and offer reduced-surveillance service at price P. A rational provider will do whatever it thinks its customers prefer: Would typical customers rather save E, or would they rather avoid surveillance?

In this scenario, surveillance isn’t actually a profit center for the provider – the payment, if accepted, gets passed on to customers as a price discount. The provider is just an intermediary; the customers are actually deciding.

But of course the government won’t allow each customer to make an individual decision whether to allow surveillance – then the bad guys could pay extra to avoid being watched. If enough customers prefer for whatever reason to avoid surveillance (at a cost of E), then some provider will emerge to serve them. So the government will have to set E large enough that the number of customers who would refuse the payment is not large enough to support even one provider. This implies a decent-sized value for E.
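
Here’s a toy numerical version of that argument, with entirely made-up numbers: each customer has some dollar valuation of avoiding surveillance, and the government keeps raising E until the remaining privacy holdouts can’t sustain a single provider.

```python
import random

random.seed(0)
# Invented market: 100,000 customers whose dollar valuations of avoiding
# surveillance are exponentially distributed with a mean of $5/month.
customers = [random.expovariate(1 / 5.0) for _ in range(100_000)]
MIN_VIABLE = 1_000   # assumed smallest customer base that supports a provider

def holdouts(E: float) -> int:
    """Customers who would forgo the discount E to avoid surveillance."""
    return sum(v > E for v in customers)

E = 0.0
while holdouts(E) >= MIN_VIABLE:   # government raises its offer...
    E += 0.25                      # ...until no privacy provider is viable
print(f"E = ${E:.2f}/customer, holdouts remaining: {holdouts(E)}")
# With these numbers E lands near $23 -- several times the mean valuation,
# i.e., a decent-sized payment per customer.
```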

But there’s another possibility. Suppose a provider claims to be refusing the payment, but secretly accepts it and allows surveillance of its customers. If customers fall for the lie, then the provider can charge P while pocketing the government payment E. Now surveillance is a profit center for the provider, as long as customers don’t catch on.

If customers know that providers might be lying, savvy customers will discount a provider’s claim to be refusing the payments. So the premium customers are willing to pay for (claims of) avoiding surveillance will be smaller, and the government can buy more surveillance more cheaply.

The incentives here get pretty interesting. The government benefits by undermining providers’ credibility, as that lowers the price it has to pay for surveillance. Providers who are cooperating with the government want to undermine their fellow providers’ credibility, thereby making customers less likely to buy from surveillance-resisting providers. Providers who claim, truthfully or not, to be refusing surveillance want to pick fights with the government, making it look less likely that they’re cooperating with the government on surveillance.

If the government wants to use surveillance, why doesn’t it simply require providers to cooperate? That’s a political question that deserves a post of its own.

Eavesdropping as a Telecom Profit Center

In 1980 AT&T was a powerful institution with a lucrative monopoly on transporting long-distance voice communications, but it was forbidden by law from permitting the government to eavesdrop without a warrant. Then in 1982 Judge Greene’s consent decree took its voice monopoly away, and in the 1980s and 90s the Internet ate the rest of its lunch. In 1996, Nicholas Negroponte wrote what many others also foresaw: “Shipping bits will be a crummy business. Transporting voice will be even worse. By 2020 … competition will render bandwidth a commodity of the worst kind, with no margins and no real basis for charging anything.”

During the 1980s and 90s, AT&T cleverly got out of every business except shipping commodity bits: in 1984 it (was forced to) split off its regional phone companies; in 1996 it (voluntarily) split off its equipment-making arm as Lucent Technologies; in 2000-2001 it sold off its wireless division to raise cash. Now AT&T long-distance bit-shipping is just a division of the former SBC, renamed AT&T.

What profit centers are left in shipping commodity bits? The United States government spends $44 billion a year on its spy agencies. It’s very plausible that the NSA is willing to pay $100 million or more for a phone/internet company to install a secret room where the NSA can spy on all the communications that pass through. A lawsuit by the EFF alleges such a room, and its existence was implicitly confirmed by the Director of National Intelligence in an interview with the El Paso Times. We know the NSA spends at least $200 million a year on information-technology outsourcing, and some of this goes to phone companies such as Verizon.

Therefore, if it’s true that AT&T has such a secret room, it may simply be that this is the only way AT&T knows how to make money off of shipping bits: it sells the government all the information that passes through. Furthermore, economics tells us that in a commodity market, if one vendor is able to lower its price below cost, then other vendors must follow unless they too can make up the difference somehow. That is, there will be substantial economic pressure on all the other telecoms to accept the government’s money in exchange for access to everybody’s mail, Google searches, and phone calls.

In the end, it could be that the phone companies that cooperated with the NSA did so not for reasons of patriotism, or because their arms were twisted, but because the NSA came with a checkbook. Taking the NSA’s money may be the only remaining profit center in bit-shipping.

AT&T Explains Guilt by Association

According to government documents studied by The New York Times, the FBI asked several phone companies to analyze phone-call patterns of Americans using a technology called “communities of interest”. Verizon refused, saying that it didn’t have any such technology. AT&T, famously, did not refuse.

What is the “communities of interest” technology? It’s spelled out very clearly in a 2001 research paper from AT&T itself, entitled “Communities of Interest” (by C. Cortes, D. Pregibon, and C. Volinsky). They use data-mining algorithms to scan through the huge daily logs of every call made on the AT&T network, then analyze the connections between phone numbers: who is talking to whom? The paper literally uses the term “Guilt by Association” to describe what they’re looking for: which phone numbers are in contact with other numbers that are in contact with the bad guys?

When this research was done, back in the last century, the bad guys were people who wanted to rip off AT&T by making fraudulent credit-card calls. (Remember, back in the last century, intercontinental long-distance voice communication actually cost money!) But it’s easy to see how the FBI could use this to chase down anyone who talked to anyone who talked to a terrorist. Or even to a “terrorist.”
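
The core of the technique is plain graph analysis. Here’s a hedged sketch (with invented call records) of the “shortest path to a fraudulent node” computation shown in the paper’s figures:

```python
from collections import deque

# Invented call records: pairs of numbers that called each other.
calls = [("alice", "bob"), ("bob", "carol"), ("carol", "badguy"), ("dave", "eve")]
flagged = {"badguy"}   # numbers already labeled as fraudulent

# Build an undirected who-talks-to-whom graph.
graph = {}
for a, b in calls:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def distance_to_flagged(start: str):
    """Breadth-first search for the shortest path to any flagged number."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, hops = queue.popleft()
        if node in flagged:
            return hops
        for peer in graph.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, hops + 1))
    return None   # no connection at all to a flagged number

for who in ("alice", "bob", "dave"):
    print(who, distance_to_flagged(who))   # alice 3, bob 2, dave None
```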

Here are a couple of representative diagrams from the paper:

[Fig. 4: Guilt by association – what is the shortest path to a fraudulent node?]

[Fig. 5: A guilt by association plot. Circular nodes correspond to wireless service accounts, while rectangular nodes are conventional land-line accounts. Shaded nodes have been previously labeled as fraudulent by network security associates.]

Email Protected by 4th Amendment, Court Says

The Sixth Circuit Court of Appeals ruled yesterday, in Warshak v. U.S., that people have a reasonable expectation of privacy in their email, so that the government needs a search warrant or similar process to access it. The Court’s decision was swayed by amicus briefs submitted by EFF and a group of law professors.

When Alice sends an email to Bob, the email will be stored, for a while at least, on an email server run by Bob’s email provider. Depending on how Bob uses email, the message may sit on the server just until Bob’s computer picks up mail (which happens every few minutes when Bob is online), or Bob may store his long-term email archive on the server. Either way the server, which is typically run by Bob’s ISP, will have a copy of the email and will have the ability to access its contents.
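
For the technically curious, the “picks up mail” step might look like the following sketch, using Python’s standard poplib (the server, account, and password are of course invented). The last step inside the loop is what decides whether Bob’s archive lives on the ISP’s server or only on his own machine.

```python
import poplib

# Invented ISP server and credentials, for illustration only.
conn = poplib.POP3_SSL("mail.example-isp.com")
conn.user("bob")
conn.pass_("hunter2")

count, _total_bytes = conn.stat()
for msgnum in range(1, count + 1):
    _resp, lines, _octets = conn.retr(msgnum)   # download message
    message = b"\r\n".join(lines)
    # ... write `message` to Bob's local mail store ...
    conn.dele(msgnum)   # delete the server's copy; omit this line and the
                        # ISP retains a copy, i.e., a long-term server archive

conn.quit()
```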

The key question in Warshak was whether, notwithstanding the ISP’s ability to read his mail, Bob still has a reasonable expectation of privacy in the email. This matters because certain Fourth Amendment protections apply where there is a reasonable expectation of privacy. The government had used a certain kind of order authorized by the Stored Communications Act to compel Warshak’s ISP to turn over Warshak’s email without notifying Warshak. Warshak argued that that was improper and the government should have been required to get a search warrant.

The key to the Court’s ruling is an analogy, offered by the amici, between email and phone calls. The phone company has the ability to listen to your calls, but courts ruled long ago that there is a reasonable expectation of privacy in the content of phone calls, so that the government cannot eavesdrop on the content of calls without a warrant. The Court accepted that email is like a phone call, for privacy purposes at least, and the ruling essentially followed from this analogy.

This is not a general ruling that warrants are required to access electronic records held by third parties. The Court’s reasoning depended on the particular attributes of email, and even on the way these particular ISPs handled email. If the ISP’s employees regularly looked at customer email in the ordinary course of business, or if there was a written agreement giving the ISP broad latitude to look at email, the Court might have found differently. Warshak had a reasonable expectation of privacy in his email, but you might not. (Randy Picker has an interesting commentary on Warshak in relation to online records held by third parties.)

Interestingly, the Court drew a line between inspection of email by computer programs, such as virus or spam checkers, and inspection by a person. The Court found that automated analysis of email did not erode the reasonable expectation of privacy, but routine manual inspection of email would erode it.

Pragmatically, a ruling like this is only possible because email has become a routine part of life for so many people. The analogy to phone calls, and the unquestioned assumption that people value the privacy of email, are both easy for judges who have gotten used to the idea of email. Ten years ago this could not have happened. Ten years from now it will seem obvious.

Orin Kerr, who is an expert in this area of the law, thinks this ruling is at higher-than-usual risk of being invalidated on appeal. That may be the case. But it seems to me that the long-term trend is toward treating email like phone calls, because that is how people think of it. The government may win this battle on appeal, but it’s likely to lose this point in the long run.