November 25, 2024

Tracking Your Every Move: iPhone Retains Extensive Location History

Today, Pete Warden and Alasdair Allan revealed that Apple’s iPhone maintains an apparently indefinite log of its location history. To show the data available, they produced and demoed an application called iPhone Tracker for plotting these locations on a map. The application allows you to replay your movements, displaying your precise location at any point in time when you had your phone. Their open-source application works with the GSM (AT&T) version of the iPhone, but I added changes to their code that allow it to work with the CDMA (Verizon) version of the phone as well.

When you sync your iPhone with your computer, iTunes automatically creates a complete backup of the phone on your machine. This backup contains any new content, contacts, and applications that were modified or downloaded since your last sync. Beginning with iOS 4, this backup also includes a SQLite database containing tables named ‘CellLocation’, ‘CdmaCellLocation’ and ‘WifiLocation’. These correspond to the GSM, CDMA and WiFi variants of location information. Each of these tables contains latitude and longitude data along with timestamps. These tables also contain additional fields that appear largely unused on the CDMA iPhone that I used for testing — including altitude, speed, confidence, “HorizontalAccuracy,” and “VerticalAccuracy.”
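To make the structure concrete, here is a minimal sketch in Python of reading location rows from such a database. The table and column names (‘CellLocation’, Latitude, Longitude, Timestamp) follow the description above, but the exact schema, and the assumption that timestamps count seconds from the Mac epoch (January 1, 2001), are illustrative; the sketch builds a tiny stand-in database rather than opening a real backup file.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Build a tiny stand-in database with the table layout described above.
# (In practice you would open the SQLite file found in the iTunes backup.)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CellLocation (
    Timestamp REAL, Latitude REAL, Longitude REAL,
    HorizontalAccuracy REAL, Altitude REAL, Speed REAL)""")
conn.execute(
    "INSERT INTO CellLocation VALUES (309300000.0, 40.35, -74.65, 500.0, 0.0, -1.0)")

# Assumption: timestamps count seconds from the Mac epoch (2001-01-01 UTC)
# rather than the Unix epoch.
MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

rows = []
for ts, lat, lon in conn.execute(
        "SELECT Timestamp, Latitude, Longitude FROM CellLocation ORDER BY Timestamp"):
    rows.append((MAC_EPOCH + timedelta(seconds=ts), lat, lon))

for when, lat, lon in rows:
    print(when.isoformat(), lat, lon)
```

Each row is one (time, latitude, longitude) point; plotting these in sequence is essentially what iPhone Tracker does.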

Interestingly, the WifiLocation table contains the MAC address of each WiFi network node you have connected to, along with an estimated latitude/longitude. The WifiLocation table in our two-month-old CDMA iPhone contains over 53,000 distinct MAC addresses, suggesting that this data is stored not just for networks your device connects to but for every network your phone was aware of (i.e. the network at the Starbucks you walked by — but didn’t connect to).
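A query along the same lines reproduces the distinct-MAC count described above. Again this runs against a stand-in table, and the WifiLocation column names are assumptions based on the description:

```python
import sqlite3

# Stand-in for the WifiLocation table; a real backup would contain
# tens of thousands of rows. Column names here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE WifiLocation (MAC TEXT, Latitude REAL, Longitude REAL, Timestamp REAL)")
conn.executemany(
    "INSERT INTO WifiLocation VALUES (?, ?, ?, ?)",
    [("00:11:22:33:44:55", 40.35, -74.65, 0.0),
     ("00:11:22:33:44:55", 40.35, -74.65, 60.0),   # same node seen twice
     ("66:77:88:99:aa:bb", 40.36, -74.66, 120.0)])

# Count each network node once, however many times it was observed.
(distinct_macs,) = conn.execute(
    "SELECT COUNT(DISTINCT MAC) FROM WifiLocation").fetchone()
print(distinct_macs)
```

The count covers every node the phone observed, not just the networks it joined, which is what makes the 53,000 figure plausible for a two-month-old phone.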

Location information persists across devices, including upgrades from the iPhone 3GS to iPhone 4, which appears to be a function of the migration process. It is important to note that you must have physical access to the synced machine (i.e. your laptop) in order to access the synced location logs. Malicious code running on the iPhone presumably could also access this data.

Not only was it unclear that the iPhone stores this data; the rationale for storing it remains a mystery. To the best of my knowledge, Apple has not disclosed that this type or quantity of information is being retained. Although Apple does not appear to be using this information currently, it is easy to imagine why it might want to: in theory, Apple could combine WiFi MAC addresses and GPS locations to create a highly accurate geolocation service.

The exact implications for mobile security (along with forensics and law enforcement) will be important to watch. What is most surprising is that this granularity of information is being stored at such a large scale on such a mainstream device.

Why seals can't secure elections

Over the last few weeks, I’ve described the chaotic attempts of the State of New Jersey to come up with tamper-indicating seals and a seal use protocol to secure its voting machines.

A seal use protocol can allow the seal user to gain some assurance that the sealed material has not been tampered with. But here is the critical problem with using seals in elections: Who is the seal user that needs this assurance? It is not just election officials: it is the citizenry.

Democratic elections present a uniquely difficult set of problems to be solved by a security protocol. In particular, the ballot box or voting machine contains votes that may throw the government out of office. Therefore, it’s not just the government—that is, election officials—that need evidence that no tampering has occurred, it’s the public and the candidates. The election officials (representing the government) have a conflict of interest; corrupt election officials may hire corrupt seal inspectors, or deliberately hire incompetent inspectors, or deliberately fail to train them. Even if the public officials who run the elections are not at all corrupt, the democratic process requires sufficient transparency that the public (and the losing candidates) can be convinced that the process was fair.

In the late 19th century, after widespread, pervasive, and long-lasting fraud by election officials, democracies such as Australia and the United States implemented election protocols in an attempt to solve this problem. The struggle to achieve fair elections lasted for decades and was hard-fought.

A typical 1890s solution works as follows: At the beginning of election day, in the polling place, the ballot box is opened so that representatives of all political parties can see for themselves that it is empty (and does not contain hidden compartments). Then the ballot box is closed, and voting begins. The witnesses from all parties remain near the ballot box all day, so they can see that no one opens it and no one stuffs it. The box has a mechanism that rings a bell whenever a ballot is inserted, to alert the witnesses. At the close of the polls, the ballot box is opened, and the ballots are counted in the presence of witnesses.

[drawing of 1890 polling place]
(From Elements of Civil Government by Alexander L. Peterman, 1891)

In principle, then, there is no single person or entity that needs to be trusted: the parties watch each other. And this protocol needs no seals at all!

Democratic elections pose difficult problems not just for security protocols in general, but for seal use protocols in particular. Consider the use of tamper-evident security seals in an election where a ballot box is to be protected by seals while it is transported and stored by election officials out of the sight of witnesses. A good protocol for the use of seals requires that seals be chosen with care and deliberation, and that inspectors have substantial and lengthy training on each kind of seal they are supposed to inspect. Without trained inspectors, it is all too easy for an attacker to remove and replace the seal without likelihood of detection.

Consider an audit or recount of a ballot box, days or weeks after an election. The box reappears, in the presence of witnesses from the political parties, from its custody in the hands of election officials. The tamper-evident seals are inspected and removed. But by whom?

If elections are to be conducted by the same principles of transparency established over a century ago, the rationale for the selection of particular security seals must be made transparent to the public, to the candidates, and to the political parties. Witnesses from the parties and from the public must be able to receive training on detection of tampering of those particular seals. There must be (the possibility of) public debate and discussion over the effectiveness of these physical security protocols.

It is not clear that this is practical. To my knowledge, such transparency in seal use protocols has never been attempted.


Bibliographic citation for the research paper behind this whole series of posts:
Security Seals On Voting Machines: A Case Study, by Andrew W. Appel. Accepted for publication, ACM Transactions on Information and System Security (TISSEC), 2011.

Web Browsers and Comodo Disclose A Successful Certificate Authority Attack, Perhaps From Iran

Today, the public learned of a previously undisclosed compromise of a trusted Certificate Authority — one of the entities that issues certificates attesting to the identity of “secure” web sites. Last week, Comodo quietly issued a command via its certificate revocation servers designed to tell browsers to no longer accept 9 certificates. This is fairly standard practice, and certificates are occasionally revoked for a variety of reasons. What was unique about this case is that it was followed by very rapid updates by several browser manufacturers (Chrome, Firefox, and Internet Explorer) that go above and beyond the normal revocation process and hard-code the certificates into a “do not trust” list. Mozilla went so far as to force this update in as the final change to their source code before shipping their major new release, Firefox 4, yesterday.
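The “do not trust” list works differently from ordinary revocation: instead of asking the CA’s revocation servers whether a certificate is still good, the browser checks each certificate it sees against fingerprints compiled into the browser itself. Here is a minimal sketch of that idea, with made-up bytes standing in for the real DER-encoded certificates:

```python
import hashlib

# Hypothetical DER-encoded certificates (stand-in bytes for illustration).
good_cert = b"\x30\x82 stand-in for a legitimate certificate"
bad_cert = b"\x30\x82 stand-in for a fraudulently issued certificate"

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

# A hard-coded "do not trust" list, consulted before (and independently of)
# any revocation lookup -- sketched here with made-up entries.
BLOCKLIST = {fingerprint(bad_cert)}

def is_blocked(der_bytes):
    return fingerprint(der_bytes) in BLOCKLIST

print(is_blocked(bad_cert))
print(is_blocked(good_cert))
```

The advantage of baking the list into the browser is that it keeps working even when the revocation check fails or is blocked, which matters precisely when an attacker controls the victim’s network.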

This implied that the certificates were likely malicious, and may even have been used by a third party to impersonate secure sites. Here at Freedom to Tinker we have explored several ways in which the current browser security system is prone to vulnerabilities [1, 2, 3, 4, 5, 6].

Clearly, something exceptional happened behind the scenes. Security hacker Jacob Appelbaum did some fantastic detective work using the EFF’s SSL Observatory data and discovered that all of the certificates in question originated from Comodo — perhaps from one of the many affiliated companies that issue certificates under Comodo’s authority via their “Registration Authority” (RA) program. Evidently, someone had figured out how to successfully attack Comodo or one of their RAs, or had colluded with them to obtain invalid certs.

Appelbaum’s pressure helped motivate a statement from Mozilla [and a follow-up] and a statement from Microsoft that gave a little bit more detail. This afternoon, Comodo released more details about the incident, including the domains for which rogue certificates were issued: mail.google.com, www.google.com, login.yahoo.com (3 certs), login.skype.com, addons.mozilla.org, login.live.com, and “global trustee”. Comodo noted:

“The attack came from several IP addresses, but mainly from Iran.”

and

“this was likely to be a state-driven attack”

[Update: Someone claiming to be the hacker has posted a manifesto. At least one security researcher finds the claim to be credible.]

It is clear that the domains in question are among the most attractive targets for someone who wants to surveil the personal communications of many people online by inserting themselves as a “man in the middle.” I don’t have any deep insights on Comodo’s analysis of the attack’s origins, but it seems plausible. (I should note that although Comodo claims that only one of the certificates was “seen live on the Internet”, their mechanism for detecting this relies on the attacker not taking some basic precautions that would be well within the means and expertise of someone executing this attack.) [update: Jacob Appelbaum also noted this, and has explained the technical details]

What does this tell us about the current security model for web browsing? This instance highlights a few issues:

  • Too many entities have CA powers: As the SSL Observatory project helped demonstrate, there are thousands of entities in the world that have the ability to issue certificates. Some of these are trusted directly by browsers, and others inherit their authority. We don’t even know who many of them are, because such delegation of authority — either via “subordinate certificates” or via “registration authorities” — is not publicly disclosed. The more of these entities exist, the more vulnerabilities exist.
  • The current system does not limit damage: Any entity that can issue a certificate can issue a certificate for any domain in the world. That means that a vulnerability at one point is a vulnerability for all.
  • Governments are a threat: All the major web browsers currently trust many government agencies as Certificate Authorities. This often includes places like Tunisia, Turkey, UAE, and China, which some argue are jurisdictions hostile to free speech. Hardware products exist and are marketed explicitly for government surveillance via a “man in the middle” attack.
  • Comodo in particular has a bad track record with their RA program: The structure of “Registration Authorities” has led to poor or nonexistent validation in the past, but Mozilla and the other browsers have so far refused to take any action to remove Comodo or put them on probation.
  • We need to step up efforts on a fix: Obviously the current state of affairs is not ideal. As Appelbaum notes, efforts like DANE, CAA, HASTLS, and Monkeysphere deserve our attention.

[Update: Jacob Appelbaum has posted his response to the Comodo announcement, criticizing some aspects of their response and the browsers.]

[Update: A few more details are revealed in this Comodo blog post, including the fact that “an attacker obtained the username and password of a Comodo Trusted Partner in Southern Europe.”]

[Update: Mozilla has made Appelbaum’s bug report publicly visible, along with the back-and-forth between him and Mozilla before the situation was made public. There are also some interesting details in the Mozilla bug report that tracked the patch for the certificate blacklist. There is yet another bug that contains the actual certificates that were issued. Discussion about what Mozilla should do in further response to this incident is proceeding in the newsgroup dev.mozilla.security.policy.]

[Update: I talked about this issue on Marketplace Tech Report.]

You may also be interested in an October 22, 2010 event that we hosted on the policy and technology issues related to online trust (streaming video available).


Seals on NJ voting machines, March 2009

During the NJ voting-machines trial, both Roger Johnston and I showed different ways of removing all the seals from voting machines and putting them back without evidence of tampering. The significance of this is that one can then install fraudulent vote-stealing software in the computer.

The State responded by switching seals yet again, right in the middle of the trial! They replaced the white vinyl adhesive-tape seal with a red tape seal that has an extremely soft and sticky adhesive. In addition, they proposed something really wacky: they would squirt superglue into the blue padlock seal and into the security screw cap.

Nothing better illustrates the State’s “band-aid approach, where serious security vulnerabilities can be covered over with ad hoc fixes” (as Roger characterizes it) than this. The superglue will interfere with the ability of election workers to (legitimately) remove the seal to maintain the machine. The superglue will make it more difficult to detect tampering, because it goes on in such a variable way that the inspector doesn’t know what’s supposed to be “normal.” And the extremely soft adhesive on the tape seal is very difficult to clean up when the election worker (legitimately) removes it to maintain the machine. Of course, one must clean up all the old adhesive before resealing the voting machine.

Furthermore, Roger demonstrated for the Court that all these seals can still be defeated, with or without the superglue. Here’s the judge’s summary of his testimony about all these seals:


New Jersey is proposing to add six different kinds of seals in nine different locations to the voting machines. Johnston testified he has never witnessed this many seals applied to a system. At most, Johnston has seen three seals applied to high-level security applications such as nuclear safeguards. According to Johnston, there is recognition among security professionals that the effective use of a seal requires an extensive use protocol. Thus, it becomes impractical to have a large number of seals installed and inspected. He testified that the use of a large number of seals substantially decreases security, because attention cannot be focused for a very long time on any one of the seals, and it requires a great deal more complexity for these seal-use protocols and for training.

For more details and pictures of these seals, see “Seal Regime #4” in this paper.

What an expert on seals has to say

During the New Jersey voting machines lawsuit, the State defendants tried first one set of security seals and then another in their vain attempts to show that the ROM chips containing vote-counting software could be protected against fraudulent replacement. After one or two rounds of this, Plaintiffs engaged Dr. Roger Johnston, an expert on physical security and tamper-indicating seals, to testify about New Jersey’s insecure use of seals.

In his day job, Roger is a scientist at the Argonne National Laboratory, working to secure (among other things) our nation’s shipments of nuclear materials. He has many years of experience in the scientific study of security seals and their use protocols, as well as physical security in general. In this trial he testified in his private capacity, pro bono.

He wrote an expert report in which he analyzed the State’s proposed use of seals to secure voting machines (what I am calling “Seal Regime #2” and “Seal Regime #3”). For some of these seals, he and his team of technicians have much slicker techniques to defeat these seals than I was able to come up with. Roger chooses not to describe the methods in detail, but he has prepared this report for the public.

What I found most instructive about Roger’s report (including in the version he has released publicly) is that he explains that you can’t get security just by looking at the individual seal. Instead, you must consider the entire seal use protocol:


Seal use protocols are the formal and informal procedures for choosing, procuring, transporting, storing, securing, assigning, installing, inspecting, removing, and destroying seals. Other components of a seal use protocol include procedures for securely keeping track of seal serial numbers, and the training provided to seal installers and inspectors. The procedures for how to inspect the object or container onto which seals are applied is another aspect of a seal use protocol. Seals and a tamper-detection program are no better than the seal use protocols that are in place.

He explains that inspecting seals for evidence of tampering is not at all straightforward. Inspection often requires removing the seal—for example, when you pull off an adhesive-tape seal that’s been tampered with, it behaves differently than one that’s undisturbed. A thorough inspection may involve comparing the seal with microphotographs of the same seal taken just after it was originally applied.

For each different seal that’s used, one can develop a training program for the seal inspectors. Because the state proposed to use four different kinds of seals, it would need four different sets of training materials. Training all the workers who would inspect the State’s 10,000 voting machines would be quite expensive. With all those seals, just the seal inspections themselves would cost over $100,000 per election.

His report also discusses “security culture.”


“Security culture” is the official and unofficial, formal and informal behaviors, attitudes, perceptions, strategies, rules, policies, and practices associated with security. There is a consensus among security experts that a healthy security culture is required for effective security….

A healthy security culture is one in which security is integrated into everyday work, management, planning, thinking, rules, policies, and risk management; where security is considered as a key issue at all employee levels (and not just an afterthought); where security is a proactive, rather than reactive activity; where security measures are carefully defined, and frequently reviewed and studied; where security experts are involved in choosing and reviewing security strategies, practices, and products; where the organization constantly seeks proactively to understand vulnerabilities and provide countermeasures; where input on potential security problems are eagerly considered from any quarter; and where wishful thinking and denial is deliberately avoided in regards to threats, risks, adversaries, vulnerabilities, and the insider threat….

Throughout his deposition … Mr. Giles [Director of the NJ Division of Elections] indicates that he believes good physical security requires a kind of band-aid approach, where serious security vulnerabilities can be covered over with ad hoc fixes or the equivalent of software patches. Nothing could be further from the truth.

Roger Johnston’s testimony about the importance of seal use protocols—as considered separately from the individual seals themselves—made a strong impression on the judge: in the remedy that the Court ordered, seal use protocols as defined by Dr. Johnston played a prominent role.