In a recent article by Scahill and Begley, we learned that the CIA is interested in targeting Apple products. I largely agree with Steve Bellovin’s quote that “spies gonna spy,” so of course they’re interested in targeting the platform that rides in the pockets of many of their intelligence collection targets. What could be a tastier platform for intelligence collection than a device with a microphone, cellular network connection, GPS, and a battery, which your targets willingly carry around in their pockets? Even better, your targets will recharge your spying device for you. Of course you target their iPhones! (And Androids. And Blackberries.)
To my mind, the real eyebrow-raising moment was that the CIA is also allegedly targeting app developers by “whacking” Apple’s Xcode tool, presumably so that all subsequent software the developer ships to the app store contains some sort of malicious implant, which would then be distributed to everyone who installs that developer’s app. Nothing has been disclosed about how widespread these attacks are (if they were ever used at all), which developers might have been targeted, or how the implants might function.
This news will cause a gold rush of security researchers downloading every app they can find and rummaging around inside to see whether they’ve got backdoors. Depending on how clever the CIA hackers were, and whether they’ve actually deployed anything, these things might still be hard to find. All the CIA really has to do is leave behind a latent vulnerability (e.g., a buffer overflow opportunity) that they happen to know how to exploit once an app is installed on a target’s phone. This gives the CIA the all-important deniability if somebody notices the vulnerability. (Alternatively, these latent vulnerabilities could include some sort of cryptographic input verification so they can’t be exploited by random third parties, but such a mechanism would largely eliminate any deniability.)
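To make the idea of a “deniable” latent vulnerability concrete, here is a purely hypothetical sketch of what one might look like in Java/Android code; the class and method names are invented for illustration, and nothing disclosed so far says this is how any real implant works. The point is that the bug reads like ordinary sloppiness rather than a backdoor:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Hypothetical helper that a tampered toolchain might quietly slip into a library.
// It looks like mundane caching code, but calling readObject() on
// attacker-controlled bytes is a well-known remote-code-execution vector when
// suitable "gadget" classes happen to be on the classpath.
final class CachedResponse {

    // 'body' might arrive from an ad or analytics server over plain HTTP,
    // so anyone in a man-in-the-middle position can choose its contents.
    static Object inflate(byte[] body) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(body))) {
            return in.readObject();  // the latent vulnerability: no whitelist, no validation
        }
    }

    private CachedResponse() {}
}
```

To an auditor, code like this is indistinguishable from the many real-world deserialization bugs that get filed and fixed every year, which is exactly what makes it deniable.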
Still, it’s not that easy. If, for example, our CIA hackers were to steal the developer’s code signing credentials, build a new version of the app, and upload it to the app store, the developer would inevitably notice (“hey, I didn’t upload a new build!”). Instead, the CIA hackers need to get into the developer’s machine and into the legitimately compiled binaries, which are then shipped legitimately to the app store and ultimately to legitimate users. Let’s sort out how that might work.
If you were going after a single developer and you could break into their specific computer, you’d just overwrite files on their hard drive: replace the compiler or build packaging tool with something evil. I’d probably put a backdoor / implant in one of the core libraries that get linked into every app. One juicy target would be analytics or advertising libraries, which regularly chit-chat with their servers to fetch content and report data back, so any network beaconing wouldn’t necessarily raise suspicion. With maliciously backdoored apps in the field, legitimately installed on millions of phones, all you have to do next is exploit the backdoor.
But how do you deliver suitable malicious content to a limited number of target phones? Most advertising is delivered over unencrypted HTTP connections. If you can mount a man-in-the-middle attack on the phone network, the hotel WiFi, or whatever else, then you can inject your exploit as a replacement for a standard unit of advertising content. If the exploit succeeds, it can then pull down the rest of the attack package, perhaps exploiting other “zero-day” vulnerabilities to break out of the app sandbox and permanently ensconce itself on the target phone.
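Here is a minimal sketch, with invented names and an illustrative URL, of the kind of ad-rendering code that creates this delivery path: the creative is fetched over plain HTTP with script enabled, so whoever controls the network controls the HTML and JavaScript that run inside the app.

```java
import android.webkit.JavascriptInterface;
import android.webkit.WebView;

// Hypothetical ad-slot code inside an advertising SDK. Because the creative is
// fetched over plain HTTP, a man-in-the-middle can substitute whatever "ad"
// they like for the content that comes back.
final class AdSlot {

    static void showAd(WebView webView, String placementId) {
        webView.getSettings().setJavaScriptEnabled(true);

        // A bridge object widens the attack surface further: injected JavaScript
        // can call straight into app code (and on older Android releases, a
        // bridge like this notoriously allowed it to reach far beyond the
        // annotated methods).
        webView.addJavascriptInterface(new Reporting(), "reporting");

        // Plain HTTP, no TLS, no integrity check on what comes back.
        webView.loadUrl("http://ads.example.invalid/serve?placement=" + placementId);
    }

    static final class Reporting {
        @JavascriptInterface
        public void logEvent(String name) {
            // ... forward analytics events to the SDK's backend ...
        }
    }

    private AdSlot() {}
}
```

Once attacker-controlled script is running inside the app’s ad view, chaining into the sort of sandbox-escape bugs described above is a matter of having the right exploits on hand.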
Still, there’s a risk that somebody scanning every app in the app store, which a large number of research groups are doing these days, might discover your backdoor. How do you avoid this? One possibility: if you happened to operate an advertising or analytics network, you’ve now got developers legitimately installing your library and doing business with you. Oh, was there an exploitable bug in our library? Sorry about that. We’ll fix it right away. (Again, the importance of deniability.)
Maybe that’s too extreme? Fine. Here’s another question: where do libraries come from? The answer has evolved a huge amount over the past two decades. Way back when, libraries were either installed on the host computer alongside the rest of the OS development environment, or they were something you separately installed and compiled against. Today, you tend to just write a simple declaration. For example, when I was building an Android app for a side project, I wanted to use Square’s open-source “Wire” library for a compact implementation of protocol buffers. All I had to do was add one line to my build.gradle file:
compile 'com.squareup.wire:wire-runtime:1.5.1'
What happens after that? How does it find that library? I ran Wireshark while doing a build and saw connections to many places fetching many things. Some were encrypted, others were plaintext, but it doesn’t really matter. That line in my build file says to fetch version 1.5.1 of the library. There’s no cryptographic hash or any other way for me to know whether I’ve been fed the proper library that I requested. If a nation-state intelligence agency is capable of breaking into the network services that backstop this particular line in my build file, then they’ve got a devastating reach. They could feed tampered libraries to everybody, or to just one specific developer.
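For readers unfamiliar with the mechanics, here is a rough sketch of how a Maven-style resolver turns that one-line coordinate into download URLs; the repository URL and class are mine, purely for illustration. Repositories do publish checksum files alongside each artifact, but those travel over the same connection from the same servers, so they catch corrupted downloads rather than providing the independent binding between my build file and particular bytes that I’d want here.

```java
// A minimal sketch of how a Maven-style resolver maps a build-file coordinate
// onto a download URL. Real builds consult whatever repositories the build
// script declares; repo1.maven.org is shown only as an example.
final class CoordinateResolver {

    static String artifactUrl(String repo, String coordinate) {
        String[] parts = coordinate.split(":");   // group : artifact : version
        String group = parts[0].replace('.', '/');
        String artifact = parts[1];
        String version = parts[2];
        return repo + "/" + group + "/" + artifact + "/" + version
                + "/" + artifact + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        // Prints:
        // https://repo1.maven.org/maven2/com/squareup/wire/wire-runtime/1.5.1/wire-runtime-1.5.1.jar
        System.out.println(artifactUrl("https://repo1.maven.org/maven2",
                "com.squareup.wire:wire-runtime:1.5.1"));
    }
}
```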
What should we do about it? Without knowing the specifics of how these attacks are rolled out, it’s impossible to guess whether these sorts of attacks are illegal. Certainly, broad-spectrum injection of “deniable” backdoors is a spectacularly dumb policy, allowing all sorts of miscreants to exploit these backdoors well beyond any narrow intent of their designers. Robert Graham suggests that the CIA wouldn’t go after broadly used apps like Angry Birds, but instead might try to go after an app nearer and dearer to the hearts and minds of specific high-value targets. Unfortunately, there’s really no such thing as a terrorist-only app (“Hatebook”? “Jihadr”? “Kaboomchat”?), which means that any attack of this sort is likely to have significant collateral damage.
We can at least posit different ways of defending against these attacks. For example, the above line in my build file could replace the library’s version number with a cryptographic hash of some sort. The various app stores (iTunes Store, Play Store, etc.) could also wire in their own protections to detect one-off tampered versions of common libraries. But what if the backdoor is injected into a single app’s core logic by a one-off tampered dev environment, which then goes through the app developer’s standard code obfuscation backend (e.g., Android’s use of ProGuard)? How does an app store distinguish the result from a legitimate new feature in a legitimate new version?
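Setting the tampered-toolchain case aside for a moment, the hash-pinning idea at the top of this paragraph is straightforward to sketch. This is not how any particular build tool implements it; it’s a minimal illustration, assuming the expected SHA-256 digest is recorded under version control next to the version number, so that a swapped-out jar fails the build no matter which server it came from.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of a dependency check where trust is anchored in the build itself:
// the expected digest lives next to the version number, so a tampered jar is
// rejected regardless of which repository or mirror served it.
final class DependencyVerifier {

    static boolean matchesPinnedDigest(Path jar, String expectedSha256Hex)
            throws IOException, NoSuchAlgorithmException {
        byte[] bytes = Files.readAllBytes(jar);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString().equalsIgnoreCase(expectedSha256Hex);
    }

    public static void main(String[] args) throws Exception {
        // Both arguments are placeholders: the jar in your build's local cache
        // and the digest you recorded when you first vetted the library.
        Path jar = Paths.get(args[0]);
        String pinned = args[1];
        if (!matchesPinnedDigest(jar, pinned)) {
            throw new IllegalStateException("Dependency digest mismatch: " + jar);
        }
        System.out.println("Dependency digest verified: " + jar);
    }
}
```

A check like this raises the bar for the wholesale library substitution described above, though, as the rest of this paragraph notes, it does nothing against a backdoor compiled in by the developer’s own tampered tools.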
Overall, in a world where developers cannot trust their own dev tools (a scenario originally posited by Ken Thompson in his Turing Award lecture, “Reflections on Trusting Trust”), we’ve got our work cut out for us. Technical countermeasures will help, but we’re also going to need to convince the world’s intelligence services (not just the CIA), and the government bodies that regulate them, that these sorts of activities operate against their larger interests. This argument is essentially the same as the argument against building in backdoors, as we’ve discussed with “export-grade” crypto or any other effort to weaken everybody’s security. Yes, you can. No, you shouldn’t.
You don’t have to own or operate an advertising or analytics product; you just have to be able to own their machines when you want to.
It would be nice if ads were served with a much-more-limited-functionality version of the standard browser kit, but that ain’t likely to happen.
I wholeheartedly agree with this article, but there is one little assumption you made that should be called into question.
“Certainly, broad-spectrum injection of ‘deniable’ backdoors is a spectacularly dumb policy, allowing all sorts of miscreants to exploit these backdoors well beyond any narrow intent of their designers.”
What makes you think that the designers (the CIA, for instance) have a “narrow intent” when they create backdoors and/or vulnerabilities in products?
Didn’t the lessons of Snowden hit home? The U.S. government does NOT do anything with “narrow intent”; its intentions are clearly very, very broad. They really don’t care if others can use the vulnerabilities, so long as they can use them. And they don’t care about unintended consequences, such as collateral damage to innocent users of these technologies. Their intentions are even broader than any virus writer’s: their intent is not to target a narrow group of people (such as jihadists) but to target every single person in the world, ESPECIALLY anyone and everyone within the United States.
That is what Snowden made clear, though he is not the only one who has been saying it.
Mike Perry and I discussed this problem in December in our talk at the Chaos Communication Congress.
https://media.ccc.de/browse/congress/2014/31c3_-_6240_-_en_-_saal_g_-_201412271400_-_reproducible_builds_-_mike_perry_-_seth_schoen_-_hans_steiner.html#video
I demonstrate how changing one bit in a binary can (re)introduce an exploitable memory-corruption vulnerability by messing up the bounds-checking logic in a C program. We also discuss a number of other aspects that readers here might find interesting.
Why would this be a surprise? Silicon Valley is ground zero for spy recruitment by many countries. Either hack the developers or recruit insiders.
Regarding libraries from advertising networks, it might be interesting to look at who owns them and what other things are owned by the same company…