October 13, 2024

The future of high school yearbooks

The Dallas Morning News recently ran a piece about how kids these days aren’t interested in buying physical, printed yearbooks. (Hat tip to my high school’s journalism teacher, who linked to it from our journalism alumni Facebook group.) Why spend $60 on a dead-trees yearbook when you can get everything you need on Facebook? My 20th high school reunion is coming up this fall, and I was the “head” photographer for my high school’s yearbook and newspaper, so this is a topic near and dear to my heart.

Let’s break down everything that a yearbook actually is and then think about how these features can and cannot be replicated in the digital world. A yearbook has:

  • higher-than-normal photographic quality (yearbook photographers hopefully own better camera equipment and know how to use their gear properly)
  • editors who do all kinds of useful things (sending photographers to events they want covered, selecting the best pictures for publication, captioning them, and indexing the people in them)
  • a physical artifact that people can pass around to their friends to mark up and personalize, and which will still be around years later

If you get rid of the physical yearbook, you’ve got all kinds of issues. Permanence is the big one. There’s nothing my high school can do to delete my yearbook after it’s been published. By contrast, if high schools host their yearbooks on school-owned equipment, those systems can and will fail over time. (Yes, I know you could run a crawler and make a copy, but I wouldn’t trust a typical high school’s IT department to build a site that will still be around decades later.) To pick one example, my high school’s web site, when it first went online, had a nice alumni registry. Within a few years, it disappeared without warning.

Okay, what about Facebook? At this point, almost a third of my graduating class is on Facebook, and I’m sure the numbers are much higher for more recent classes. Some of my classmates are digging up old pictures, posting them, and tagging each other. With social networking as part of the yearbook process from the start, you can get some serious traction in replacing physical yearbooks. Yearbook editors and photography staff can still cover events, select good pictures, caption them, and index them. The social networking aspect covers some of the personalization and markup that we got by writing in each others’ yearbooks. That’s fun, but please somebody convince me that Facebook will be here ten or twenty years from now. Any business that doesn’t make money will eventually go out of business, and Facebook is no exception.

Aside from the permanence issue, is anything else lost by going to a Web 2.0 social networking non-printed yearbook? Censorship-happy high schools (and we all know what a problem that can be) will never allow a social network site that they control to host students’ genuine expressions of distaste for all the things that rebellious youth like to complain about. Never mind that the school has a responsibility to maintain some measure of student privacy. Consequently, no high school would endorse the use of a social network that it couldn’t control and censor. I’m sure several of the people who wrote in my yearbook could have gotten in trouble if what they wrote had been brought before the school administration, yet those comments are the best part of my yearbook. Nothing takes you back quite as much as off-color commentary.

One significant lever that high school yearbooks have, which commercial publications like newspapers generally lack, is that they’re non-profit. If the yearbook breaks even financially, the staff is doing a good job. (And, in the digital universe, the costs are perhaps lower. I personally shot hundreds of rolls of black-and-white film, processed them, and printed them, and we had many more photographers on our staff. My high school paid for all the film, paper, and photo chemistry that we used. Now they just need computers, although those aren’t exactly cheap, either.) So what if they don’t print so many physical yearbooks? Sure, the yearbook staff can do a short vanity press run so they can enter competitions and maybe win something, but otherwise they can put out a PDF or pickle the bowdlerized social network’s contents down to a DVD-ROM and call it a day. That hopefully creates enough permanence. What about uncensored commentary? That’s probably going to have to happen outside the yearbook context. Any high school student can sign up for a webmail account and keep all their email for years to come. (Unlike Facebook, the webmail companies seem to be making money.) Similarly, the ubiquity of digital point-and-shoot cameras ensures that students will have uncensored, personal, off-color memories.

[Sidebar: There’s a reality show on TV called “High School Reunion.” Last year, they reunited some people from my school’s class of 1987. I was in the class of 1989. Prior to the show airing, I was contacted by one of the producers, wanting to use some of my photographs in the show. She sent me a waiver that basically had me indemnifying them for their use of my work; of course, they weren’t offering to pay me anything. Really? No thanks. One of the interesting questions was whether the photos were even “my property” for which I could grant them permission in the first place. There were no contracts of any kind when I signed up to work on the yearbook. You could argue that the school retains an interest in the pictures, never mind the original subjects from whom we never got model releases. Our final contract said, in effect, that I represented that I took the pictures and had no problem with them using them, but I made no claims as to ownership, and they indemnified me against any issues that might arise.

Question for the legal minds here: I have three binders full of negatives from my high school years. I could well invest a week of my time, borrow a good scanner, digitize the whole collection, and post it online, either on my own web site or on Facebook. Should I? Am I opening myself to legal liability?]

Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code’s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNET’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their ideological critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005, companies like eBay had started dropping Passport from their sites. The service has been rebranded Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geolocation are all technologies that work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. Technological changes made cyberspace more susceptible to regulation without any changes to the Internet’s architecture.
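
To make this concrete, here is a minimal sketch (in Python, using only the standard library; the host, paths, and cookie value are hypothetical) of how a cookie turns the stateless web into something trackable using nothing more than ordinary HTTP headers carried over vanilla TCP/IP:

```python
# Minimal sketch: a cookie is just an HTTP header riding on ordinary TCP/IP.
# The host, paths, and cookie value here are hypothetical.
import http.client

# First visit: the server hands the client an identifier in a response header.
conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()  # drain the body
set_cookie = resp.getheader("Set-Cookie")          # e.g. "session=abc123; Path=/"
cookie = set_cookie.split(";")[0] if set_cookie else ""
conn.close()

# Later visit: the client volunteers the cookie back, and only then can the
# server tie the two visits together. Nothing at the network layer changes.
conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/profile", headers={"Cookie": cookie})
print(conn.getresponse().status)
conn.close()
```

The tracking happens entirely at the application layer, which is exactly why it did not require the architectural transformation Lessig predicted.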

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography, it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
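
To illustrate the DPI point, here is a minimal sketch of why any unencrypted protocol is exposed: once a middlebox can see payload bytes, classification is just pattern matching. The BitTorrent handshake prefix and the TLS record bytes are real protocol signatures; the function name and sample payloads are illustrative.

```python
# Minimal sketch: deep packet inspection over unencrypted traffic is just
# pattern matching on payload bytes that a middlebox can already see.

def classify_payload(payload: bytes) -> str:
    """Guess the application protocol from the first bytes of a TCP payload."""
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "bittorrent"
    if payload.startswith((b"GET ", b"POST ", b"HTTP/1.")):
        return "http"
    if payload[:2] == b"\x16\x03":  # TLS handshake record: opaque beyond this point
        return "tls (encrypted; contents hidden from inspection)"
    return "unknown"

print(classify_payload(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
print(classify_payload(b"\x13BitTorrent protocol" + bytes(8)))
```

The only traffic the classifier cannot look inside is the encrypted kind, which is why nothing short of mandatory encryption would have kept DPI tools from being built.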

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.

A Modest Proposal: Three-Strikes for Print

Yesterday the French parliament adopted a proposal to create a “three-strikes” system that would kick people off the Internet if they are accused of copyright infringement three times.

This is such a good idea that it should be applied to other media as well. Here is my modest proposal to extend three-strikes to the medium of print, that is, to words on paper.

My proposed system is simplicity itself. The government sets up a registry of accused infringers. Anybody can send a complaint to the registry, asserting that someone is infringing their copyright in the print medium. If the government registry receives three complaints about a person, that person is banned for a year from using print.

As in the Internet case, the ban applies to both reading and writing, and to all uses of print, including informal ones. In short, a banned person may not write or read anything for a year.

A few naysayers may argue that print bans might be hard to enforce, and that banning communication based on mere accusations of wrongdoing raises some minor issues of due process and free speech. But if those issues don’t trouble us in the Internet setting, why should they trouble us here?

Yes, if banned from using print, some students will be unable to do their school work, some adults will face minor inconvenience in their daily lives, and a few troublemakers will not be allowed to participate in — or even listen to — political debate. Maybe they’ll think more carefully the next time, before allowing themselves to be accused of copyright infringement.

In short, a three-strikes system is just as good an idea for print as it is for the Internet. Which country will be the first to adopt it?

Once we have adopted three-strikes for print, we can move on to other media. Next on the list: three-strikes systems for sound waves, and light waves. These media are too important to leave unprotected.


Recovery Act Spending: Getting to the Bottom Line

Under most circumstances, government spending is slow and deliberate—a key fact that helps reduce the chances of waste and fraud. But the recently passed Recovery Act is a special case: spending the money quickly is understood to be essential to the success of the Act. We all know that shoppers in a hurry tend to get less value for their money. But, ironically, the overall macroeconomic impact of the stimulus (and hence the average stimulative effect per dollar spent) may be maximized by quick spending, even if the speed premium does increase the total amount of waste and abuse.

This situation creates a paradox for transparency and oversight efforts. On the one hand, the quicker pace of spending makes it all the more important to provide for public scrutiny, and to provide information in ways that will rapidly enable as many people as possible to take advantage of the stimulus opportunities available to them. On the other, the same rush that makes transparency important also reduces the time available for those within government to design and build an infrastructure for stimulus transparency.

One of the troubling tradeoffs that has been made thus far involves information about stimulus funds that flow from the federal government to states and then from states to localities. This pattern is rarer than you might think, since much of the Recovery Act spending flows more directly from federal agencies to end recipients. But for funds that do follow a path from federal to state to local officials, recent guidance issued April 3 by the Office of Management and Budget (OMB) makes clear that the federal reporting infrastructure being created for Recovery.gov will not collect information about what the localities ultimately do with the funds.

OMB says that it does have the legal authority to require detailed reporting on “all levels of subawards,” reaching end recipients (Acme Concrete or whoever gets a contract or grant from the municipality at the end of the governmental chain). But in the context of its sprint to get at least some system into place as soon as possible (with the debut date for the Recovery.gov system already pushed back to October), OMB has left this deep-level reporting out of its immediate plans. The office says that it “plans to expand the reporting model in the future to also obtain this information, once the system capabilities and processes have been established.”

On Monday, ten congressmen sent a letter to OMB urging it to collect this detailed information “as early as possible.” One reason for OMB to formulate detailed operational plans in this area, as I argued in recent testimony before the House Committee on Oversight and Government Reform, is that clarity from the top will help states make competent choices about what, if anything, they should do to support or supplement the federal reporting. As the members of Congress write:

While it is positive that OMB goes on to reserve the right in the guidance to expand this reporting model in the future, it would seem exercising this right and requiring this level of reporting as early as possible would help entities prepare for the disclosures before projects begin and provide clarification for states as they begin investing in new infrastructure to track ARRA funds.

In the end, everyone agrees that this detailed information about subawards is important to have—OMB “plans to collect” it and the signatories to yesterday’s letter want collection to start “as soon as possible.” But how soon is that? We don’t really know. The details of hard choices facing OMB as it races to implement the Recovery.gov reporting system are themselves not public, and making them public might (or might not) itself slow down the development of the site. If no system were permitted to launch without fully detailed reporting of subawards, we might wait longer for the web site’s launch. How much longer? OMB might not itself be sure, since software development times are notoriously difficult to forecast, and OMB has never before been asked to build a system of this kind. OMB asserts that it’s moving as fast as it can to collect as much information as possible, and without slowing it down to ask for explanations, we can’t really check that assertion.

Transparency often reduces the degree to which citizens must trust public officials. But in this case, ironically, it seems most reasonable to operate on the optimistic but realistic assumption that the people working on Recovery Act transparency are doing their jobs well, and to hope for good results.

Breathalyzer Source Code Secrecy Endangers Minnesota Drunk Driving Convictions

The Minnesota Supreme Court ruled recently that defendants accused of drunk driving in the state are entitled to have their experts inspect the source code for the software in the Intoxilyzer breath-testing machines used by police to gauge the defendants’ blood alcohol levels. The defendants argued, successfully, that they were entitled to examine and challenge the evidence against them, including the design and functioning of devices used to generate that evidence.

The ruling puts many of the state’s drunk driving prosecutions on thin ice, because CMI, the Intoxilyzer’s maker, is withholding the source code and the state apparently has no way to force CMI to provide the code.

Eric Rescorla argues, reasonably, that breath testers have many potential failure modes unrelated to software, and that source code analysis can be labor-intensive and might not turn up any clear problems. Both arguments are valid, as far as they go.

I’m not a lawyer, so I won’t try to guess whether the court’s ruling was correct as a matter of law. But the ruling does seem right as a matter of policy. If we are troubled by criminal convictions relying on secret evidence, then we should also be troubled by convictions relying on evidence generated by a secret process. To the extent that the Intoxilyzer functions as a secret process, the state should not be relying on it in criminal prosecutions.

(Though I haven’t thought carefully about the question, I might potentially draw a different policy conclusion in a civil case, where the standard of proof is preponderance of evidence, rather than guilt beyond a reasonable doubt.)

The problem is illustrated nicely by a contradiction in the arguments that CMI and the state are making. On the one hand, they argue that the machine’s source code contains valuable trade secrets — I’ll call them the “secret sauce” — and that CMI’s business would be substantially harmed if its competitors learned about the secret sauce. On the other hand, they argue that there is no need to examine the source code because it operates straightforwardly, just reading values from some sensors and doing simple calculations to derive a blood alcohol estimate.

It’s hard to see how both arguments can be correct. If the software contains secret sauce, then by definition it has aspects that are neither obvious nor straightforward, and those aspects are important for the software’s operation. In other words, the secret sauce — whatever it is — must be relevant to the defendants’ claims.
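
To see why the two claims sit uneasily together, here is a purely hypothetical sketch of what genuinely “straightforward” breath-test code would look like: read a sensor value, apply a calibration, report a number. The names and constants below are invented for illustration and are not taken from the Intoxilyzer.

```python
# Purely hypothetical sketch of the "simple calculation" the state describes.
# The calibration constants and names are illustrative, not from the Intoxilyzer.

CAL_SLOPE = 1.02      # hypothetical constants set during lab calibration
CAL_OFFSET = -0.003

def estimated_bac(ir_absorbance: float) -> float:
    """Map a raw infrared-absorbance reading to a blood alcohol estimate."""
    return max(0.0, CAL_SLOPE * ir_absorbance + CAL_OFFSET)

print(f"Estimated BAC: {estimated_bac(0.081):.3f} g/100mL")
```

If the production code really were this simple, there would be little trade-secret value in it and little reason to resist disclosure; whatever else the code actually does is presumably what the defense experts want to examine.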

As in electronic voting, where we have seen similar secrecy arguments, one can’t help suspecting that the real “secret” is that the software quality is not what it should be. A previous study of source code from New Jersey breath testers did appear to find some embarrassing errors.

Let’s hope that breath tester companies can do better than e-voting companies. A rigorous, independent evaluation of the breath tester source code would either determine that the code is sound, or it would uncover problems that could then be fixed, to restore confidence in the machines. Either way, the police in Minnesota would end up with a reliable tool for giving drunk drivers the punishment they deserve.