November 24, 2024

Announcing the CITP Visitors for 2010-2011

We are delighted to announce the CITP visiting scholars, practitioners, and collaborators for the 2010-2011 academic year. This diverse group of leading thinkers reflects CITP’s highly interdisciplinary interests. We look forward to their work at the center, and welcome them to the family. The short list is below, but you can find fuller descriptions on the announcement page.

  • Ronaldo Lemos, Fundação Getulio Vargas Law School
  • Fengming Liu, Microsoft
  • Frank Pasquale, Seton Hall
  • Wendy Seltzer, Berkman Center
  • Susan Crawford, Cardozo Law School
  • Alex Halderman, University of Michigan
  • Joe Hall, UC Berkeley School of Information
  • Ron Hedges, Former Federal Magistrate Judge
  • Adrian Hong, Pegasus Project
  • Rebecca MacKinnon, New America Foundation
  • Philip Napoli, Fordham
  • W. Russell Neuman, University of Michigan
  • Steven Roosa, Reed Smith


My Experiment with "Digital Drugs"

The latest scare meme is “digital drugs” or “i-dosing”, in which kids listen to audio tracks that supposedly induce altered mental states. Concerned adults fear that these “digital drugs” may be a gateway to harder (i.e., actual) drugs. Rumors are circulating among some kids: “I heard it was like some weird demons and stuff through an iPod”. In a way, it’s a perfect storm of scare memes, involving (1) “drugs”, (2) the Internet, and (3) kids listening to freaky music.

When I heard about these “digital drugs”, I naturally had to try them, in the interest of science.

(All joking aside, I only did this because I knew it was safe and legal. I don’t like to mess with my brain. I rely on my brain to make my living. Without my brain, I’d be … a zombie, I guess.)

I downloaded a “digital drug” track, donned good headphones, lay down on my bed, closed my eyes, blanked my mind, and pressed “play”. What I heard was a kind of droning noise, accompanied by a soft background hiss. It was not unlike the sound of a turboprop airplane during post-takeoff ascent, with two droning engines and the soft hiss of a ventilation fan. This went on for about fifteen minutes, with the drone changing pitch every now and then. That was it.

Did this alter my consciousness? Not really. If anything, fifteen minutes of partial sensory deprivation (eyes closed, hearing nothing but droning and hissing) might have put me in a mild meditative state, but frankly I could have reached that state more easily without the infernal droning, just by lying still and blanking my mind.

Afterward I did some web surfing to try to figure out why people think these sounds might affect the brain. To the extent there is any science at all behind “digital drugs”, it involves playing sounds of slightly different frequencies into your two ears, thereby supposedly setting up a low-frequency oscillation in the auditory centers of your brain, which will supposedly interact with your brain waves that operate at a very similar frequency. This theory could be hooey for all I know, but it sounds kind of science-ish so somebody might believe it. I can tell you for sure that it didn’t work on me.
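To the extent the underlying idea is testable at all, it is this “binaural beat” effect, and it is easy to generate the stimulus yourself. The sketch below (plain Python; the specific frequencies are illustrative, not taken from any actual “i-dosing” track) builds two sine tones a few hertz apart, one per ear; the claimed “beat” is simply their difference frequency:

```python
import math

SAMPLE_RATE = 44100   # CD-quality sample rate, in samples per second
LEFT_HZ = 200.0       # tone for the left ear (illustrative value)
RIGHT_HZ = 207.0      # tone for the right ear, 7 Hz higher

def sine_tone(freq_hz, n_samples, rate=SAMPLE_RATE):
    """Return a pure sine tone as a list of floats in [-1.0, 1.0]."""
    return [math.sin(2.0 * math.pi * freq_hz * i / rate)
            for i in range(n_samples)]

n = SAMPLE_RATE  # one second of audio per channel
left = sine_tone(LEFT_HZ, n)
right = sine_tone(RIGHT_HZ, n)

# The supposed "binaural beat" is the difference between the two tones:
# 207 - 200 = 7 Hz, which happens to sit in the theta band (roughly
# 4-8 Hz) of brain-wave frequencies -- hence the claimed connection.
beat_hz = RIGHT_HZ - LEFT_HZ
```

Writing the two channels out as a stereo audio file and listening would reproduce the droning I describe above; whether a 7 Hz difference tone does anything to your brain waves is, as noted, another matter entirely.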

So, kids: don’t do digital drugs. They’re a waste of time. And if you don’t turn down the volume, you might actually damage your hearing.

Bilski and the Value of Experimentation

The Supreme Court’s long-awaited decision in Bilski v. Kappos brought closure to this particular patent prosecution, but not much clarity to the questions surrounding business method patents. The Court upheld the Federal Circuit’s conclusion that the claimed “procedure for instructing buyers and sellers how to protect against the risk of price fluctuations in a discrete section of the economy” was unpatentable, but threw out the “machine-or-transformation” test the lower court had used. In its place, the Court’s majority gave us a set of “clues” which future applicants, Sherlock Holmes-like, must use to discern the boundaries separating patentable processes from unpatentable “abstract ideas.”

The Court missed an opportunity to throw out “business method” patents, where a great many of these abstract ideas are currently claimed, and failed to address the abstraction of many software patents. Instead, Justice Kennedy’s majority seemed to go out of its way to avoid deciding even the questions presented, simultaneously appealing to the new technological demands of the “Information Age”:

    As numerous amicus briefs argue, the machine-or-transformation test would create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.

and yet re-ups the uncertainty on the same page:

    It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection.

The Court’s opinion dismisses the Federal Circuit’s brighter-line “machine-or-transformation” test in favor of hand-waving standards: a series of “clues,” “tools” and “guideposts” toward the unpatentable “abstract ideas.” While Kennedy notes that “This Age puts the possibility of innovation in the hands of more people,” his opinion leaves all of those people with new burdens of uncertainty — whether they seek patents, or reject patents’ exclusivity but risk running into the patents of others. No wonder Justice Stevens, who concurs in the rejection of Bilski’s application but would have thrown business method patents out with it, calls the whole thing “less than pellucid.”

The one thing the meandering makes clear is that while the Supreme Court doesn’t like the Federal Circuit’s test (despite the Federal Circuit’s attempt to derive it from prior Supreme Court precedents), neither do the Supremes want to propose a new test of their own. The decision, like prior patent cases to reach the Supreme Court, points to a larger structural problem: the lack of a diverse proving ground for patent cases.

Since 1982, patent cases, unlike most other cases in our federal system, have all been appealed to a single court, the United States Court of Appeals for the Federal Circuit. Thus while copyright appeals, for example, are heard in the circuit court for the district in which they originate (one of twelve regional circuits), all patent appeals are funneled to the Federal Circuit. And while its judges may be persuaded by other circuits’ opinions, one circuit is not bound to follow its fellows, and circuits may “split” on legal questions. Consolidation in the Federal Circuit deprives the Supreme Court of such “circuit splits” in patent law. At most, the Court may see dissents from the Federal Circuit’s panel or en banc decisions. If it doesn’t like the Federal Circuit’s test, the Supreme Court has no other appellate court to which to turn.

Circuit splits are good for judicial decisionmaking. They permit experimentation and dialogue around difficult points of law. (The Supreme Court hears fewer than 5% of the cases appealed to it, but is twice as likely to take cases presenting inter-circuit splits.) Like the states in the federal system, multiple circuits provide a “laboratory [to] try novel social and economic experiments.” Diverse judges examining the same law, as presented in differing circumstances, can analyze it from different angles (and differing policy perspectives). The Supreme Court considering an issue ripened by the analysis of several courts is more likely to find a test it can support, less likely to have to craft one from scratch or abjure the task. At the cost of temporary non-uniformity, we may get empirical evidence toward better interpretation.

At a time when “harmonization” is pushed as justification for treaties (and a uniform ratcheting-up of intellectual property regimes), the Bilski opinion suggests again that uniformity is overrated, especially if it’s uniform murk.

Identifying Trends that Drive Technology

I’m trying to compile a list of major technological and societal trends that influence U.S. computing research. Here’s my initial list. Please post your own suggestions!

  • Ubiquitous connectivity, and thus true mobility
  • Massive computational capability available to everyone, through the cloud
  • Exponentially increasing data volumes – from ubiquitous sensors, from higher-volume sensors (digital imagers everywhere!), and from the creation of all information in digital form – have led to a torrent of data that must be transferred, stored, and mined: “data to knowledge to action”
  • Social computing – the way people interact has been transformed; the data we have from and about people is transforming
  • All transactions (from purchasing to banking to voting to health care) are online, creating the need for dramatic improvements in privacy and security
  • Cybercrime
  • The end of single-processor performance increases, and thus the need for parallelism to increase performance in operating systems and productivity applications, not just high-end applications; also power issues
  • Asymmetric threats, need for surveillance, reconnaissance
  • Globalization – of innovation, of consumption, of workforce
  • Pressing national and global challenges: climate change, education, energy / sustainability, health care (these replace the cold war)

What’s on your list? Please post below!

[cross-posted from CCC Blog]

The Stock-market Flash Crash: Attack, Bug, or Gamesmanship?

Andrew wrote last week about the stock market’s May 6 “flash crash”, and whether it might have been caused by a denial-of-service attack. He points to a detailed analysis by nanex.com that unpacks what happened and postulates a DoS attack as a likely cause. The nanex analysis is interesting and suggestive, but I see the situation as more complicated and even more interesting.

Before diving in, two important caveats: First, I don’t have access to raw data about what happened in the market that day, so I will accept the facts as posited by nanex. If nanex’s description is wrong or incomplete, my analysis won’t be right. Second, I am not a lawyer and am not making any claims about what is lawful or unlawful. With that out of the way …

Here’s a short version of what happened, based on the nanex data:
(1) Some market participants sent a large number of quote requests to the New York Stock Exchange (NYSE) computers.
(2) The NYSE normally puts outgoing price quotes into a queue before they are sent out. Because of the high rate of requests, this queue backed up, so that some quotes took a (relatively) long time to be sent out.
(3) A quote lists a price and a time. The NYSE determined the price at the time the quote was put into the queue, and timestamped each quote at the time it left the queue. When the queues backed up, these quotes would be “stale”, in the sense that they had an old, no-longer-accurate price — but their timestamps made them look like up-to-date quotes.
(4) These anomalous quotes confused other market participants, who falsely concluded that a stock’s price on the NYSE differed from its price on other exchanges. This misinformation destabilized the market.
(5) The faster a stock’s price changed, the more out-of-kilter the NYSE quotes would be. So instability bred more instability, and the market dropped precipitously.

The first thing to notice here is that (assuming nanex has the facts right) there appears to have been a bug in the NYSE’s system. If a quote goes out with price P and time T, recipients will assume that the price was P at time T. But the NYSE system apparently generated the price at one time (on entry to the queue) and the timestamp at another time (on exit from the queue). This is wrong: the timestamp should have been generated at the same time as the price.
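Assuming nanex has the mechanism right, the bug is easy to illustrate with a toy simulation (pure Python; the tick-based prices and the fixed queue delay are invented for illustration and bear no relation to real NYSE internals):

```python
from collections import deque

def run_queue(delay_ticks, stamp_at_exit=True):
    """Simulate a quote queue over 10 time ticks. The price is captured
    when a quote enters the queue; the timestamp is applied either when
    it leaves (the buggy behavior) or when it entered (the fix).
    Returns the (price, timestamp) pairs actually sent out."""
    prices = {t: 100.0 - t for t in range(10)}  # price drops 1 point/tick
    queue = deque()
    sent = []
    for t in range(10):
        queue.append((prices[t], t))   # price and entry time captured now
        if t >= delay_ticks:           # queue drains with a fixed backlog
            price, entered = queue.popleft()
            stamp = t if stamp_at_exit else entered
            sent.append((price, stamp))
    return sent

buggy = run_queue(delay_ticks=3, stamp_at_exit=True)
fixed = run_queue(delay_ticks=3, stamp_at_exit=False)
# With a 3-tick backlog, every buggy quote carries a 3-tick-old price
# under a current-looking timestamp; the fixed feed's stamps match.
```

In the buggy run, the first quote sent is the tick-0 price of 100.0 stamped with time 3 — exactly the stale-but-fresh-looking quote described in step (3), and the discrepancy grows with the length of the backlog.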

But notice that this kind of bug won’t cause much trouble under normal conditions, when the queue is short and the timestamp discrepancy is small. The problem might not have been noticed in normal operation, and might not be caught in testing, unless the testing procedure takes pains to create a long queue and to check the consistency of timestamps with prices. This looks like the kind of bug that developers dread, where the problem only manifests under unusual conditions, when the system is under a certain kind of strain. This kind of bug is an accident waiting to happen.

To see how the accident might develop and be exploited, let’s consider the behavior of three imaginary people, Alice, Bob, and Claire.

Alice knows the NYSE has this timestamping bug. She knows that if the bug triggers and the NYSE starts issuing dodgy quotes, she can make a lot of money by exploiting the fact that she is the only market participant who has an accurate view of reality. Exploiting the others’ ignorance of real market conditions—and making a ton of money—is just a matter of technique.

Alice acts to exploit her knowledge, deliberately triggering the NYSE bug by flooding the NYSE with quote requests. The nanex analysis implies that this is probably what happened on May 6. Alice’s behavior is ethically questionable, if not illegal. But, the nanex analysis notwithstanding, deliberate triggering of the bug is not the only possibility.

Bob also knows about the bug, but he doesn’t go as far as Alice. Bob programs his systems to exploit the error condition if it happens, but he does nothing to cause the condition. He just waits. If the error condition happens naturally, he will exploit it, but he’ll take care not to cause it himself. This is ethically superior to a deliberate attack (and might be more defensible legally).

(Exercise for readers: Is it ethical for Bob to deliberately refrain from reporting the bug?)

Claire doesn’t know that the NYSE has a bug, but she is a very careful programmer, so she writes code that watches other systems for anomalous behavior and ignores systems that seem to be misbehaving. When the flash crash occurs, Claire’s code detects the dodgy NYSE quotes and ignores them. Claire makes a lot of money, because she is one of the few market participants who are not fooled by the bad quotes. Claire is ethically blameless — her virtuous programming was rewarded. But Claire’s trading behavior might look a lot like Alice’s and Bob’s, so an investigator might suspect Claire of unethical or illegal behavior.
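Claire’s defensive check can be sketched in a few lines. This hypothetical filter compares each venue’s quote against the cross-venue median and discards outliers; the venue names, prices, and 5% tolerance are all invented for illustration:

```python
def filter_quotes(quotes, tolerance=0.05):
    """Drop quotes from any venue whose price strays more than `tolerance`
    (as a fraction) from the median price across venues -- a crude version
    of Claire's sanity check. `quotes` maps venue name -> quoted price."""
    prices = sorted(quotes.values())
    median = prices[len(prices) // 2]
    return {venue: p for venue, p in quotes.items()
            if abs(p - median) / median <= tolerance}

quotes = {"NYSE": 92.0, "NASDAQ": 100.1, "BATS": 99.9, "ARCA": 100.0}
trusted = filter_quotes(quotes)
# The stale 92.0 quote is more than 5% from the median and is ignored;
# the three mutually consistent venues survive.
```

Real trading systems would of course use far more sophisticated anomaly detection, but even this crude cross-check is enough to ignore the dodgy NYSE quotes in the scenario above — which is why Claire’s trading during the crash looks so much like Alice’s and Bob’s.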

Notice that even if there are no Alices or Bobs, but only virtuous Claires, the market might still have a flash crash and people might make a lot of money from it, even in the absence of a denial-of-service attack or indeed of any unethical behavior. The flood of quote requests that triggered the queue backup might have been caused by another bug somewhere, or by an unforeseen interaction between different systems. Only careful investigation will be able to untangle the causes and figure out who is to blame.

If the nanex analysis is at all correct, it has sobering implications. Financial markets are complex, and when we inject complex, buggy software into them, problems are likely to result. The May flash crash won’t be the last time a financial market gyrates due to software problems.