April 20, 2024

Book Club Discussion: Code, Chapters 3 and 4

This week in Book Club we read Chapters 3 and 4 of Lawrence Lessig’s Code and Other Laws of Cyberspace.

Now it’s time to discuss the chapters. I’m especially eager to see discussion of this week’s chapters, and not just general reflections on the book as a whole.

You can chime in by entering a comment below.

For next week, we’ll read Chapter 5.

Comments

  1. Your dog is chasing its tail!

  2. I think the basic point is that policy can be implemented in or enforced by architectural decisions…i.e. code.

    The Provost at Chicago apparently said, “Let there be openness”, and the network designers allowed plug-in connection to the network. The administrator at Harvard said, “Let there be authentication”, and the network designers made it so. Wonderful thing, engineering.

    I believe that there is great value in examining how a policy decision can be enforced by a particular code/architecture model, and what its implications are for the real world. This is the contribution of Lessig’s book.

    But for all models, the devil is in the details. As I read through the book, I find myself wondering about the technical problems associated with trying to actually implement policy. Real code has a tendency to (1) not be implemented correctly, (2) implement more restrictions than were intended, and (3) break in strange ways, allowing users to circumvent it (a toy sketch at the end of this comment shows all three). All of these have analogues in real space (law enforcement agents may be poorly trained, prejudiced, open to bribery, etc.).

    Perhaps I am taking the code-is-law analogy too literally or simplistically. However, I do feel rather strongly that when considering policy built into code, it is important to consider failure modes at the same time.
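
    To make the point concrete, here is a toy filter of the kind a network might use to enforce a hypothetical “block anonymous relays” policy. The addresses come from the RFC 5737 documentation ranges and are purely illustrative:

        # Toy "policy in code": drop traffic from known anonymous relays.
        BLOCKED_RELAYS = {"203.0.113.7", "198.51.100.22"}

        def should_block(source_ip):
            """Return True if the policy says to drop this packet."""
            return source_ip in BLOCKED_RELAYS

        def sloppy_should_block(source_ip):
            # Failure modes (1) and (2): a careless prefix match, written
            # "to be safe", also blocks innocent neighbors.
            return any(source_ip.startswith(relay) for relay in BLOCKED_RELAYS)

        print(should_block("203.0.113.7"))          # True  (intended behavior)
        print(sloppy_should_block("203.0.113.70"))  # True  (over-blocks a bystander)
        # Failure mode (3): both checks key on the source address, so a user
        # who routes through an unlisted relay circumvents the policy entirely.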

  3. Kevin, I understand that different network configurations result in more or less information being revealed about the originators of traffic. The question is, should the choice of configuration within a particular network be considered an architectural decision or a policy decision?

    Presumably we can agree that when network administrators block traffic from anonymous relays, they’re making a pure policy decision–in most cases, not one line of code changes anywhere in their system. In the other cases you refer to, it may be that an administrator has to install and configure an extra system component to get from one configuration to another. But is it useful to call this extra step an architectural decision? Is there anything about it that makes it fundamentally different from the policy decision to block traffic from certain IP addresses known to function as anonymous relays?
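
    To sharpen the question, note that in many systems the administrator’s “decision” is nothing more than configuration data fed to unchanged code. A minimal sketch, with invented addresses:

        import json

        # Hypothetical policy file that an administrator edits; the filtering
        # code below never changes when the policy does.
        POLICY = json.loads("""
        {
          "block_anonymous_relays": true,
          "blocked_ips": ["203.0.113.7", "198.51.100.22"]
        }
        """)

        def permit(source_ip):
            """The 'architecture' is fixed; behavior is driven entirely by POLICY."""
            if POLICY["block_anonymous_relays"] and source_ip in POLICY["blocked_ips"]:
                return False
            return True

        print(permit("203.0.113.7"))  # False under this policy
        print(permit("192.0.2.1"))    # True (another documentation address)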

  4. Kevin Kenny says

    There is a difference in architecture, though, between fully-authenticated networks (typified by many intranets in large corporations) and more-or-less open ones (typified by, say, Lessig’s example of the University of Chicago). That difference has to do with the number of hurdles that a computer has to clear to be granted access to the network.

    As I understand it, Harvard at the time had no DHCP (Dynamic Host Configuration Protocol) on its network; to interoperate with the network, a host had to have an address assigned in advance by the network administrators. Failure to use an assigned address would not only mean that a user’s computer would not function correctly; it would also compromise the function of the network for other hosts on its segment. The fact that the host address was arbitrarily assigned by a central authority meant that all packets originating and terminating on that host could be identified and tied to the host’s owner.

    By contrast, the Chicago approach allowed a newly-acquired computer simply to join the net by being plugged in, announcing its existence, and being assigned its address by a program. This approach reduces administrative costs, because the addresses don’t have to be managed actively. It also improves network availability, since it avoids the accidental host address collisions that are commonplace in non-DHCP networks.

    A third approach, which became popular with the advent of wireless networking, combines some of both worlds. “MAC” filtering (MAC stands for Media Access Control) uses a unique code assigned to each network card by the hardware manufacturer as a token to the gatekeeper; only if the machine is on an approved list can the machine connect. Once the machine is approved, though, normal access control mechanisms come into play and the machine can be assigned its address dynamically. The assignment can, of course, be logged, allowing the packets to be traced. (This is the scheme used in most corporate networks; a toy sketch of such a gatekeeper appears at the end of this comment.)

    Instead of (or in addition to) MAC filtering, one can use encryption as the guardian of a network. It is possible to architect one’s network so that any machine in possession of the appropriate keys can be connected, without being identified further. Many home networks are configured this way.

    In this way, I’d contend that Lessig is right when he states that architectural decisions can pave the way for more or less anonymity. That said, though, the local network is not the usual layer where anonymity is assailed. Rather, anonymous speech on the network in 2005 is compromised primarily because it has been so abused. “Anonymous remailers/reposters” and the like still exist; but because they have been used so much by spammers and trolls, their traffic is widely blocked. Nevertheless, pseudonymous communications are alive and well; this site, for instance, doesn’t require me to credential myself before posting this message. (Indeed, I *have* used my legal name with it. But how would you know if I hadn’t told you?)
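
    Lest the MAC-filtering scheme sound abstract, here is the toy gatekeeper promised above. It is purely illustrative (the MAC addresses and address pool are invented), not a description of any real DHCP server:

        # Toy gatekeeper: MAC filtering plus logged dynamic address
        # assignment, in the spirit of the corporate scheme above.
        from itertools import count

        APPROVED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}
        _next_host = count(10)
        lease_log = {}  # assigned IP -> MAC: the record that lets packets be traced

        def admit(mac):
            """Admit a machine only if its MAC is on the approved list, then
            assign an address dynamically and log the lease."""
            if mac not in APPROVED_MACS:
                return None                      # the gatekeeper refuses the connection
            ip = f"10.0.0.{next(_next_host)}"    # dynamic assignment, DHCP-style
            lease_log[ip] = mac                  # logging restores traceability
            return ip

        print(admit("00:1a:2b:3c:4d:5e"))  # e.g. '10.0.0.10'
        print(admit("de:ad:be:ef:00:00"))  # None: not on the approved list
        print(lease_log)                   # the IP -> MAC mapping used for tracing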

  5. (this post has comments on chapter 3)

    First, Mr. Simon of course has a point in saying that architecture and functionality are different aspects of a network. Authentication, for instance, has to be implemented at least in the form of identifying computer addresses within a (limited) address space, even if the network is anonymous with respect to user identification.

    Second, it seems as if Mr. Lessig puts up a straw man and shoots it down when he discusses the assertion that cyberspace is impervious to government. In my opinion, this is not so much a property of any cyberspace as of a group of users of that space. We see that in any non-anonymous top layer of cyberspace (he calls it the net95+), people develop tools to make part of the traffic anonymous, encrypted, or both.

    This feature is NOT essential to many users; in fact, many people welcome some degree of identification. For instance, if your access provider offers an e-mail address or space on a server – accessible by SMTP, POP, or FTP – and you choose to use it, this authentication is obviously a “good thing”.

    It is quite possible that people who happily use these authenticated features would also appreciate some form of unmonitored traffic. But since bandwidth and traffic are part of today’s pricing schemes, this usually amounts to encrypted traffic, which is still governed by some limitations (policies or features of the architecture) of the network.

    It is obvious to me that any pressure on anonymity will trigger efforts to create new evasions of identification. The BitTorrent scheme springs to mind. But this is a property of users, not of networks.

  6. Since nobody’s commenting (have I scared everyone away?), I’ll continue with my evisceration. Today’s theme is Lessig’s brutal misuse of the word “architecture”. In Chapter 3, Lessig contrasts the networking services offered by two different universities during the 1990s–Harvard’s, which required host authentication, and Chicago’s, which didn’t–and claims that the “design” or “architecture” of the two embodied and promoted different “values”. In Chapter 4, he gives a really, really distorted history of the Internet, and a badly jumbled tour of various authentication technologies that can layer on top of it. His theme, basically, is that anonymity is good–but also, that “architecture” can foster or limit anonymity, and that “architectures of control” can be layered onto architectures like the Internet to undermine its supposed original anonymity.

    Forget, for a moment, Lessig’s historical and technological cluelessness. The simple fact is that he has no idea whether the “architecture” of the University of Chicago’s computer network differed by one bit of code from the “architecture” of the network at Harvard. For all he knows, the code was exactly the same–except that administrators at Chicago turned off their authentication functionality, whereas Harvard’s didn’t. In other words, if the word “architecture” has any meaning at all, and if that meaning has anything whatsoever to do with “code”, in the computer networking setting, then whether a network is anonymous or not has absolutely nothing whatsoever to do with its “architecture”.

    Now, granted, if a network is explicitly built without any authentication functionality, then authentication code has to be added if authentication is to work. But somebody makes that decision, just as somebody makes the decision to turn it on or off if it’s already there. In other words, whether the network requires authentication or not is determined by the network administrator’s policy, and only enforced through features of the “architecture”. (A toy illustration of this point appears at the end of this comment.)

    The appropriate analogy is to the locks on the university’s doors: if they’re not there, then the doors are unlocked by default. If the university decides to install locks, then it can send someone around to lock some or all of the doors at 9PM, or midnight, or not at all. Now, one could, I suppose, describe the presence or absence of locks as a feature of the campus “architecture”. But such a description would be jarring, given the ease with which locks can be installed or removed–let alone locked or unlocked. A much more natural phrasing would be that the university installs or removes, and locks or unlocks, the locks on campus doors according to a campus policy. The same is true–even more so, given the malleability of code–for the installation and management of security software in a network.

    Why, then, does Lessig misleadingly insist that “architecture” determines the degree of anonymity in a network? I believe it’s essentially a crude rhetorical trick. The issue of anonymity in computer networks raises all sorts of complicated questions about security, liability, economics, politics, and much more. Who pays for an anonymous network? How does an anonymous network deal with misuses–for crime, for instance? What happens if a user’s anonymity is breached by some means? And if the network is partially or completely non-anonymous, then what rules govern the distribution and use of users’ identity information? What happens if those rules aren’t followed? How are they enforced? A sincere discussion of all of these questions would yield few definitive answers, and lots of complicated tradeoffs.

    But Lessig isn’t interested in sincere, inconclusive discussion–he’s an anonymity enthusiast, and he wants to rally people to the cause of making as many networks as possible as anonymous as possible. The “architecture” canard allows him to cast the debate in terms of freedom versus control–“architectures of control” enslave, and we must not be enslaved!–without actually having to address all the difficult questions that anonymity and authentication policies raise.
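
    To put the locks-on-doors point in code: here is a minimal sketch (all names invented) of a single “architecture” whose anonymity is a one-flag policy choice. Flip REQUIRE_AUTH and the “Chicago” network becomes the “Harvard” one, without a line of the access logic changing:

        REQUIRE_AUTH = False  # False: Chicago-style open net; True: Harvard-style
        REGISTERED_HOSTS = {"host-a", "host-b"}  # invented registry, for illustration

        def grant_access(host_id):
            """One code path either way; the flag is the entire 'policy'."""
            if REQUIRE_AUTH:
                return host_id in REGISTERED_HOSTS
            return True  # open network: anything that plugs in gets on

        print(grant_access("unknown-laptop"))  # True while REQUIRE_AUTH is False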