November 26, 2024

Book Club: Code and Other Laws of Cyberspace

[Index of all book club postings]
[RSS feed (book club postings)]
[RSS feed (book club comments, but not postings)]

Freedom to Tinker is hosting an online book club discussion of Lawrence Lessig’s book Code and Other Laws of Cyberspace. Lessig has created a wiki (an online collaborative space) with the text of the book, and he is encouraging everyone to edit the wiki to help create a new edition of the book.

You can buy a paper version of the book or read it online for free.

We’ll read one or two chapters each week, and we’ll discuss what we read on the main Freedom to Tinker blog.

Below is the current reading schedule, which will be updated periodically.

Date              Readings due             Discussion
Friday June 10    Preface and Chapter 1    [discuss]
Friday June 17    Chapter 2                [discuss]
Friday June 24    Chapters 3 and 4

Virtually Unprotected

Today’s New York Times has a strongly worded editorial arguing that the U.S. is vulnerable to a devastating cyberattack and that national action is required.

We are indeed vulnerable to cyberattack, but this may not be our most serious unaddressed vulnerability. Is the threat of cyberattack more serious than, say, the threat of a physical attack on the natural gas distribution system? Probably not. Nonetheless, cyberattack is a serious enough problem to merit national attention.

As a participant in the Princeton Project on National Security, I have learned about national security planning; and it seems that traditional governmental processes are ill-suited to addressing cyberthreats. The main reason is that national security processes produce plans for governmental action; but the cyberthreat problem can be solved only by private action. The cyber infrastructure is in private hands, and is designed to serve private ends. Government can’t easily change it.

Other critical infrastructures, such as the electric power system, are also in private hands, but they are more amenable to government influence for various reasons. The electric power system is operated by a relatively small number of companies; but the cyberinfrastructure is operated by many companies and by ordinary citizens. (The computer you are reading this on is part of the cyberinfrastructure.) The electric power industry has a longstanding, strong industry association focused on reliability; but the infotech industries are disorganized. The electric power industry has historically consisted of regulated monopolies accustomed to taking orders from government; but the infotech industry has been more freewheeling.

There are a few levers government could pull to get the private stewards of the cyberinfrastructure to change their behavior, but none of them looks promising. Mandating the use of certain security technologies is costly and may not improve security if people comply with the letter but not the spirit of the mandate. Changing liability rules is problematic, for reasons I have discussed previously (1, 2, 3). Using the government’s purchasing power to change producers’ incentives might help, but would have limited effect, given the relatively small share of purchases made by the government.

To make things worse, our knowledge of how to secure the cyberinfrastructure is rudimentary. Improving the security of critical systems would be hugely expensive; and large improvements are probably impossible anyway given our current state of knowledge.

Probably the best thing government can do is to invest in research, in the hope that someday we will better understand how to secure systems at reasonable cost. That doesn’t solve the problem now, and doesn’t help much even five years from now; but it might do a lot of good in the longer term.

What is the government actually doing about cybersecurity research funding? Cutting it.

Book Club

I’m thinking about running a Freedom to Tinker book club. I would choose a book and invite readers to read it along with me, one chapter per week. I would post a reaction to each chapter (or I would ask another reader to do so), and a discussion would follow in the comments section. This would become a regular feature, with a posting every Friday (tentatively).

To start, I’m considering Larry Lessig’s book Code and Other Laws of Cyberspace. It’s not a new book, but it’s been hugely influential – perhaps too influential. Larry is now running a wiki, which started with the book’s text, and which will form the basis for a new edition. So we could contribute our ideas to the wiki and help to shape the next edition of the book.

Please let me know what you think of the book club idea in general, and about reading Code specifically.

Dissecting the Witty Worm

A clever new paper by Abhishek Kumar, Vern Paxson, and Nick Weaver analyzes the Witty Worm, which infected computers running certain security software in March 2004. By analyzing the spray of random packets Witty sent around the Internet, they managed to learn a huge amount about Witty’s spread, including exactly where the worm was injected into the net, and which sites might have been targeted specially by the attacker. They could track with some precision exactly who infected whom as Witty spread.

They did this using data from a “network telescope”. The “telescope” sits in a dark region of the Internet: a region containing 0.4% of all valid IP addresses but no real computers. The telescope records every single network packet that shows up in this dark region. Since there are no ordinary computers in the region, any packets that do show up must be sent in error, or sent to a randomly chosen address.
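To make the telescope idea concrete, here is a minimal sketch, in Python, of the kind of processing involved: watch a block of unused addresses, and treat any packet arriving there as evidence that its source is scanning at random. The address block, packet format, and function name below are all made up for illustration; the real telescope monitored routed but unused space covering about 0.4% of addresses.

    import ipaddress

    # Hypothetical dark block; a real telescope watches routed but unused space.
    DARK_SPACE = ipaddress.ip_network("10.0.0.0/8")

    def first_infection_times(packets):
        """packets: (timestamp, src_ip, dst_ip) tuples, sorted by timestamp."""
        first_seen = {}
        for ts, src, dst in packets:
            if ipaddress.ip_address(dst) in DARK_SPACE:
                # A packet aimed at dark space marks src as (probably) infected;
                # the earliest such packet approximates when the infection began.
                first_seen.setdefault(src, ts)
        return first_seen

In practice backscatter and misconfigured hosts add some noise, but a worm’s steady stream of probes stands out clearly against an otherwise silent address block.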

Witty, like many worms, spread by sending copies of itself to randomly chosen IP addresses. An infected machine would send 20,000 copies of the worm to random addresses, then do a little damage to the local machine, then send 20,000 more copies of the worm to random addresses, then do more local damage, and so on, until the local machine died. When one of the random packets happened to arrive at a vulnerable machine, that machine would be infected and would join the zombie army pumping out infection packets.
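The spread dynamics this produces are easy to model. Below is a toy simulation of the process just described (a model of the spread, not worm code): each infected host fires 20,000-probe bursts at uniformly random 32-bit addresses, and any probe that lands on a vulnerable host recruits it. The size of the vulnerable population is an illustrative guess, not a figure from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    ADDRESS_SPACE = 2 ** 32
    N_VULNERABLE = 12_000            # assumed vulnerable population (illustrative)
    BURST = 20_000                   # probes per burst, as described above
    p_hit = N_VULNERABLE / ADDRESS_SPACE

    infected = 1                     # Patient Zero
    for burst_round in range(600):   # one burst per infected host per round
        probes = infected * BURST
        hits = rng.binomial(probes, p_hit)  # probes landing on vulnerable hosts
        # Some hits strike already-infected machines and change nothing.
        fresh = rng.binomial(hits, 1 - infected / N_VULNERABLE)
        infected = min(N_VULNERABLE, infected + fresh)
        if burst_round % 100 == 0:
            print(burst_round, infected)

The familiar S-curve falls out: growth is slow while random probes rarely find victims, explosive once a few hundred hosts are pumping out bursts, and flat once the vulnerable pool is exhausted.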

Whenever an infected machine happened to send an infection packet into the telescope’s space, the telescope would record the packet and the researchers could deduce that the source machine was infected. So they could figure out which machines were infected, and when each infection began.

Even better, they realized that infected machines were generating the sequence of “random” addresses to attack using a linear congruential pseudorandom number generator (LCPRNG): a deterministic procedure that cranks out a sequence of numbers that looks random, but isn’t random in the sense that coin flips are random. An LCPRNG has the property that if you can observe its output, you can predict which “random” numbers it will generate in the future, and you can even calculate which “random” numbers it generated in the past. Now here’s the cool part: the infection packets arriving at the telescope contained “random” addresses produced by an LCPRNG, so the researchers could reconstruct the exact state of the generator on each infected machine. And from that, they could reconstruct the exact sequence of infection attempts each infected machine made.
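The reversibility is worth spelling out. An LCPRNG steps its state by x_next = (a*x + c) mod m, and whenever the multiplier a is invertible mod m, the step can be undone exactly. Here is a small sketch, using the multiplier and increment widely reported for Witty’s generator (the familiar Microsoft C runtime LCG):

    A, C, M = 214013, 2531011, 2 ** 32   # Witty's reported LCG parameters
    A_INV = pow(A, -1, M)                # inverse exists because A is odd

    def forward(x):
        return (A * x + C) % M

    def backward(x):
        return ((x - C) * A_INV) % M

    x = 0xDEADBEEF
    assert backward(forward(x)) == x     # every step can be undone exactly

Once the researchers recovered a machine’s generator state (the paper explains how, since each packet reveals only part of the state), they could replay that machine’s entire sequence of “random” targets, forward and backward.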

Now they knew pretty much everything there was to know about the spread of the Witty worm. They could even reconstruct the detailed history of which machine infected which other machine, and when. This allowed them to trace the infection back to its initial source, “Patient Zero”, which operated from a particular IP address owned by a “European retail ISP”. They observed that Patient Zero did not follow the usual infection rules, meaning that it was running special code designed to launch the worm, apparently by spreading the worm to a “hit list” of machines suspected to be vulnerable. A cluster of machines on the hit list happened to be at a certain U.S. military installation, suggesting that the perpetrator had inside information that machines at that installation would be vulnerable.

The paper goes on from there, using the worm’s spread as a natural experiment on the behavior of the Internet. Researchers often fantasize about doing experiments where they launch packets from thousands of random sites on the Internet and measure the packets’ propagation, to learn about the Internet’s behavior. The worm caused many machines to send packets to the telescope, making it a kind of natural experiment that would have been very difficult to do directly. Lots of useful information about the Internet can be extracted from the infection packets, and the authors proceeded to deduce facts about the local area networks of the infected machines, how many disks they had, the speeds of various network bottlenecks, and even when each infected machine had last been rebooted before catching the worm.
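One example of the flavor of these inferences, with made-up numbers: if the telescope covers a fraction f of the address space and the worm’s targets are uniformly random, then a host observed sending r packets per second into the telescope is sending roughly r/f packets per second overall, which, multiplied by the packet size, bounds its access-link bandwidth. None of the figures below are from the paper.

    f = 1 / 256              # telescope fraction, roughly the 0.4% above
    observed_rate = 4.0      # pkts/s seen from one source (made-up number)
    packet_bytes = 1100      # rough size of one worm packet (assumption)

    total_rate = observed_rate / f                   # ~1024 pkts/s overall
    bandwidth = total_rate * packet_bytes * 8 / 1e6  # ~9 Mbit/s
    print(round(total_rate), "pkts/s,", round(bandwidth, 1), "Mbit/s")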

This is not a world-changing paper, but it is a great example of what skilled computer scientists can do with a little bit of data and a lot of ingenuity.

On a New Server

This site is on the new server now, using WordPress. Please let me know, in the comments, if you see any problems.