November 23, 2024

Why did anybody believe Haystack?

Haystack, a hyped technology that claimed to help political dissidents hide their Internet traffic from their governments, has been pulled by its promoters after independent researchers got a chance to study it and found severe problems.

This should come as a surprise to nobody. Haystack exhibited the warning signs of security snake oil: the flamboyant, self-promoting front man; the extravagant security claims; the super-sophisticated secret formula that cannot be disclosed; the avoidance of independent evaluation. What’s most interesting to me is that many in the media, and some in Washington, believed the Haystack hype, despite the apparent lack of evidence that Haystack would actually protect dissidents.

Now come the recriminations.

Jillian York summarizes the depressing line of adulatory press stories about Haystack and its front man, Austin Heap.

Evgeny Morozov at Foreign Affairs, who has been skeptical of Haystack from the beginning, calls several Internet commentators (Zittrain, Palfrey, and Zuckerman) “irresponsible” for failing to criticize Haystack earlier. Certainly, Z, P, and Z could have raised questions about the rush to hype Haystack. But the tech policy world is brimming with overhyped claims, and it’s too much to expect pundits to denounce them all. Furthermore, although Z, P, and Z know a lot about the Internet, they don’t have the expertise to evaluate the technical question of whether Haystack users can be tracked — even assuming the evidence had been available.

Nancy Scola, at TechPresident, offers a more depressing take, implying that it’s virtually impossible for reporters to cover technology responsibly.

It takes real work for reporters and editors to vet tech stories; it’s not enough to fact check quotes, figures, and events. Even “seeing a copy [of the product],” as York puts it, isn’t enough. Projects like Haystack need to be checked out by technologists in the know, and I’d argue that before the recent rise of techno-advocates like, say, Clay Johnson or Tom Lee, there weren’t obvious knowledgeable sources for even dedicated reporters to call to help them make sense of something like Haystack, on deadline and in English.

Note the weasel-word “obvious” in the last sentence — it’s not that qualified experts don’t exist, it’s just that, in Scola’s take, reporters can’t be bothered to find out who they are.

I don’t think things are as bad as Scola implies. We need to remember that the majority of tech reporters didn’t hype Haystack. Non-expert reporters should have known to be wary about Haystack, just based on healthy journalistic skepticism about bold claims made without evidence. I’ll bet that many of the more savvy reporters shied away from Haystack stories for just this reason. The problem is that the few who didn’t show that skepticism got undeserved attention.

[Update (Tue 14 Sept 2010): Nancy Scola responds, saying that her point was that reporters’ incentives are to avoid checking up too much on enticing-if-true stories such as Haystack. Fair enough. I didn’t mean to imply that she condoned this state of affairs, just that she was pointing out its existence.]

It’s Time for India to Face its E-Voting Problem

The unjustified arrest of Indian e-voting researcher Hari Prasad, while an ordeal for Prasad and his family, and an embarrassment to the Indian authorities, has at least helped to focus attention on India’s risky electronic voting machines (EVMs).

Sadly, the Election Commission of India, which oversees the country’s elections, is still sticking to its position that the machines are “perfect” and “fully tamperproof”, despite evidence to the contrary including convincing peer-reviewed research by Prasad and colleagues, not to mention the common-sense fact that no affordable electronic device can ever hope to be perfect or tamperproof. The Election Commission can no longer plausibly deny that EVM vulnerabilities exist. The time has come for India to have an honest, public conversation about how it votes.

The starting point for this discussion must be to recognize the vulnerabilities of EVMs. Like paper ballots, the ballots stored in an EVM are subject to tampering during and after the election, unless they are monitored carefully. But EVMs, unlike paper ballots, are also subject to tampering before the election, perhaps months or years in advance. Indeed, for many EVMs these pre-election vulnerabilities are the most serious problem.

So which voting system should India use? That’s a question for the nation to decide based on its own circumstances, but it appears there is no simple answer. The EVMs have problems, and old-fashioned paper ballots have their own problems. Despite noisy claims to the contrary from various sides, showing that one is imperfect does not prove that the other must be used. Most importantly, the debate must recognize that there are more than two approaches — for example, most U.S. jurisdictions are now moving to systems that combine paper and electronics, such as precinct-count optical scan systems in which the voter marks a paper ballot that is immediately read by an electronic scanner. Whether a similar system would work well for India remains an open question, but there are many options, including new approaches that haven’t been invented yet, and India will need to do some serious analysis to figure out what is best.

To find the best voting system for India, the Election Commission will need all of the help it can get from India’s academic and technical communities. It will especially need help from people like Hari Prasad. Getting Prasad out of jail and back to work in his lab would not only serve justice — which should be reason enough to free him — but would also serve the voters of India, who deserve a better voting system than they have.

My Experiment with "Digital Drugs"

The latest scare meme is “digital drugs” or “i-dosing”, in which kids listen to audio tracks that supposedly induce altered mental states. Concerned adults fear that these “digital drugs” may be a gateway to harder (i.e., actual) drugs. Rumors are circulating among some kids: “I heard it was like some weird demons and stuff through an iPod”. In a way, it’s a perfect storm of scare memes, involving (1) “drugs”, (2) the Internet, and (3) kids listening to freaky music.

When I heard about these “digital drugs”, I naturally had to try them, in the interest of science.

(All joking aside, I only did this because I knew it was safe and legal. I don’t like to mess with my brain. I rely on my brain to make my living. Without my brain, I’d be … a zombie, I guess.)

I downloaded a “digital drug” track, donned good headphones, lay down on my bed, closed my eyes, blanked my mind, and pressed “play”. What I heard was a kind of droning noise, accompanied by a soft background hiss. It was not unlike the sound of a turboprop airplane during post-takeoff ascent, with two droning engines and the soft hiss of a ventilation fan. This went on for about fifteen minutes, with the drone changing pitch every now and then. That was it.

Did this alter my consciousness? Not really. If anything, fifteen minutes of partial sensory deprivation (eyes closed, hearing nothing but droning and hissing) might have put me in a mild meditative state, but frankly I could have reached that state more easily without the infernal droning, just by lying still and blanking my mind.

Afterward I did some web surfing to try to figure out why people think these sounds might affect the brain. To the extent there is any science at all behind “digital drugs”, it involves playing sounds of slightly different frequencies into your two ears, thereby supposedly setting up a low-frequency oscillation in the auditory centers of your brain, which will supposedly interact with your brain waves that operate at a very similar frequency. This theory could be hooey for all I know, but it sounds kind of science-ish so somebody might believe it. I can tell you for sure that it didn’t work on me.
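The “slightly different frequency in each ear” idea is easy to demonstrate in a few lines of code. Here’s a minimal sketch of how such a track could be generated (my own illustration of the claimed technique, not a reconstruction of any actual “i-dosing” product): a 200 Hz tone in the left ear and a 210 Hz tone in the right, which supposedly yields a perceived 10 Hz “beat”.

```python
import math
import struct
import wave

# Sketch of the "binaural beat" technique described above: a pure tone
# in each ear, differing by the supposed "dose" frequency.
RATE = 44100        # samples per second
LEFT_HZ = 200.0     # tone played into the left ear
RIGHT_HZ = 210.0    # tone played into the right ear -> claimed 10 Hz beat

def binaural_frames(seconds=2.0):
    """Yield (left, right) 16-bit sample pairs for a stereo tone pair."""
    for i in range(int(RATE * seconds)):
        t = i / RATE
        left = int(32767 * 0.5 * math.sin(2 * math.pi * LEFT_HZ * t))
        right = int(32767 * 0.5 * math.sin(2 * math.pi * RIGHT_HZ * t))
        yield left, right

def write_wav(path="beat.wav", seconds=2.0):
    """Write the stereo tone pair to a WAV file playable over headphones."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)   # stereo: one tone per ear
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        data = b"".join(struct.pack("<hh", l, r)
                        for l, r in binaural_frames(seconds))
        w.writeframes(data)

print(abs(RIGHT_HZ - LEFT_HZ))  # the difference frequency: 10.0
```

Note that the only thing this provably produces is a 10 Hz amplitude interference pattern in the audio; whether it does anything to brain waves is exactly the unproven part.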

So, kids: don’t do digital drugs. They’re a waste of time. And if you don’t turn down the volume, you might actually damage your hearing.

Identifying Trends that Drive Technology

I’m trying to compile a list of major technological and societal trends that influence U.S. computing research. Here’s my initial list. Please post your own suggestions!

  • Ubiquitous connectivity, and thus true mobility
  • Massive computational capability available to everyone, through the cloud
  • Exponentially increasing data volumes – from ubiquitous sensors, from higher-volume sensors (digital imagers everywhere!), and from the creation of all information in digital form – have led to a torrent of data which must be transferred, stored, and mined: “data to knowledge to action”
  • Social computing – the way people interact has been transformed; the data we have from and about people is transforming
  • All transactions (from purchasing to banking to voting to health) are online, creating the need for dramatic improvements in privacy and security
  • Cybercrime
  • The end of single-processor performance increases, and thus the need for parallelism to increase performance in operating systems and productivity applications, not just high-end applications; also power issues
  • Asymmetric threats, need for surveillance, reconnaissance
  • Globalization – of innovation, of consumption, of workforce
  • Pressing national and global challenges: climate change, education, energy / sustainability, health care (these replace the cold war)

What’s on your list? Please post below!

[cross-posted from CCC Blog]

The Stock-market Flash Crash: Attack, Bug, or Gamesmanship?

Andrew wrote last week about the stock market’s May 6 “flash crash”, and whether it might have been caused by a denial-of-service attack. He points to a detailed analysis by nanex.com that unpacks what happened and postulates a DoS attack as a likely cause. The nanex analysis is interesting and suggestive, but I see the situation as more complicated and even more interesting.

Before diving in, two important caveats: First, I don’t have access to raw data about what happened in the market that day, so I will accept the facts as posited by nanex. If nanex’s description is wrong or incomplete, my analysis won’t be right. Second, I am not a lawyer and am not making any claims about what is lawful or unlawful. With that out of the way …

Here’s a short version of what happened, based on the nanex data:
(1) Some market participants sent a large number of quote requests to the New York Stock Exchange (NYSE) computers.
(2) The NYSE normally puts outgoing price quotes into a queue before they are sent out. Because of the high rate of requests, this queue backed up, so that some quotes took a (relatively) long time to be sent out.
(3) A quote lists a price and a time. The NYSE determined the price at the time the quote was put into the queue, and timestamped each quote at the time it left the queue. When the queues backed up, these quotes would be “stale”, in the sense that they had an old, no-longer-accurate price — but their timestamps made them look like up-to-date quotes.
(4) These anomalous quotes confused other market participants, who falsely concluded that a stock’s price on the NYSE differed from its price on other exchanges. This misinformation destabilized the market.
(5) The faster a stock’s price changed, the more out-of-kilter the NYSE quotes would be. So instability bred more instability, and the market dropped precipitously.

The first thing to notice here is that (assuming nanex has the facts right) there appears to have been a bug in the NYSE’s system. If a quote goes out with price P and time T, recipients will assume that the price was P at time T. But the NYSE system apparently generated the price at one time (on entry to the queue) and the timestamp at another time (on exit from the queue). This is wrong: the timestamp should have been generated at the same time as the price.

But notice that this kind of bug won’t cause much trouble under normal conditions, when the queue is short so that the timestamp discrepancy is small. The problem might not have been noticed in normal operation, and might not be caught in testing, unless the testing procedure takes pains to create a long queue and to check for the consistency of timestamps with prices. This looks like the kind of bug that developers dread, where the problem only manifests under unusual conditions, when the system is under a certain kind of strain. This kind of bug is an accident waiting to happen.

To see how the accident might develop and be exploited, let’s consider the behavior of three imaginary people, Alice, Bob, and Claire.

Alice knows the NYSE has this timestamping bug. She knows that if the bug triggers and the NYSE starts issuing dodgy quotes, she can make a lot of money by exploiting the fact that she is the only market participant who has an accurate view of reality. Exploiting the others’ ignorance of real market conditions—and making a ton of money—is just a matter of technique.

Alice acts to exploit her knowledge, deliberately triggering the NYSE bug by flooding the NYSE with quote requests. The nanex analysis implies that this is probably what happened on May 6. Alice’s behavior is ethically questionable, if not illegal. But, the nanex analysis notwithstanding, deliberate triggering of the bug is not the only possibility.

Bob also knows about the bug, but he doesn’t go as far as Alice. Bob programs his systems to exploit the error condition if it happens, but he does nothing to cause the condition. He just waits. If the error condition happens naturally, he will exploit it, but he’ll take care not to cause it himself. This is ethically superior to a deliberate attack (and might be more defensible legally).

(Exercise for readers: Is it ethical for Bob to deliberately refrain from reporting the bug?)

Claire doesn’t know that the NYSE has a bug, but she is a very careful programmer, so she writes code that watches other systems for anomalous behavior and ignores systems that seem to be misbehaving. When the flash crash occurs, Claire’s code detects the dodgy NYSE quotes and ignores them. Claire makes a lot of money, because she is one of the few market participants who are not fooled by the bad quotes. Claire is ethically blameless — her virtuous programming was rewarded. But Claire’s trading behavior might look a lot like Alice’s and Bob’s, so an investigator might suspect Claire of unethical or illegal behavior.
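Claire’s defensive strategy can be sketched in a few lines as well. This is a hypothetical illustration, not any real trading system: before trusting a quote from one venue, compare it against the median quote for the same stock on other exchanges, and ignore feeds that deviate beyond a threshold.

```python
import statistics

def trustworthy(quote, other_venue_quotes, max_deviation=0.02):
    """Return True if `quote` is within max_deviation (as a fraction)
    of the median quote from the other venues.

    Claire-style sanity check: a venue whose quotes drift far from the
    consensus of the other exchanges is treated as misbehaving and ignored.
    """
    consensus = statistics.median(other_venue_quotes)
    return abs(quote - consensus) / consensus <= max_deviation

# A stale quote of 10.00 while other venues say ~9.40 gets rejected;
# a quote in line with the consensus is accepted.
print(trustworthy(10.00, [9.40, 9.42, 9.38]))  # False
print(trustworthy(9.41, [9.40, 9.42, 9.38]))   # True
```

Note that code like this would have kept Claire out of trouble during the flash crash, yet from the outside her resulting trades would be hard to distinguish from Alice’s or Bob’s, which is exactly the investigator’s problem described above.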

Notice that even if there are no Alices or Bobs, but only virtuous Claires, the market might still have a flash crash and people might make a lot of money from it, even in the absence of a denial-of-service attack or indeed of any unethical behavior. The flood of quote requests that triggered the queue backup might have been caused by another bug somewhere, or by an unforeseen interaction between different systems. Only careful investigation will be able to untangle the causes and figure out who is to blame.

If the nanex analysis is at all correct, it has sobering implications. Financial markets are complex, and when we inject complex, buggy software into them, problems are likely to result. The May flash crash won’t be the last time a financial market gyrates due to software problems.