Archives for October 2005

Do University Honor Codes Work?

Rick Garnett over at PrawfsBlawg asked his readers about student honor codes and whether they work. His readers, who seem to be mostly lawyers and law students, chimed in with quite a few comments, most of them negative.

I have dealt with honor codes at two institutions. My undergraduate institution, Caltech, has a simply stated and all-encompassing honor code that is enforced entirely by the students. My sense was that it worked very well when I was there. (I assume it still does.) Caltech has a small (800 students) and relatively homogeneous student body, with a student culture that features less student versus student competitiveness than you might expect. Competition there tends to be student versus crushing workload. The honor code was part of the social contract among students, and everybody appreciated the benefits it provided. For example, you could take your final exams at the time and place of your choosing, even if they were closed-book and had a time limit; you were trusted to follow the rules.

Contrasting this to the reports of Garnett’s readers, I can’t help but wonder if honor codes are especially problematic in law schools. There is reportedly more cutthroat competition between law students, which could be more conducive to ethical corner-cutting. Competitiveness is an engine of our adversarial legal system, so it’s not surprising to see law students so eager to win every point, though it is disappointing if they do so by cheating.

I’ve also seen Princeton’s disciplinary system as a faculty member. Princeton has a student-run honor code system, but it applies only to in-class exams. I don’t have any first-hand experience with this system, but I haven’t heard many complaints. I like the system, since it saves me from the unpleasant and trust-destroying task of policing in-class exams. Instead, I just hand out the exams, then leave the room and wait nearby to answer questions.

Several years ago, I did a three-year term on Princeton’s Student-Faculty Committee on Discipline, which deals with all serious disciplinary infractions, whether academic or non-academic, except those relating to in-class exams. This was hard work. We didn’t hear a huge number of cases, but it took surprisingly long to adjudicate even seemingly simple cases. I thought this committee did its job very well.

One interesting aspect of this committee was that faculty and students worked side by side. I was curious to see whether student and faculty attitudes toward the disciplinary process would differ, but it turned out there were surprisingly few differences. If anything, the students were on average slightly more inclined to impose stronger penalties than the faculty, though the differences were small and opinions shifted from case to case. I don’t think this reflected selection bias either; discussions with other students over the years have convinced me that students support serious and uniform punishment for violators. So I doubt there would be much difference in the outcomes of a student-run versus a faculty-run disciplinary process.

One lesson from Garnett’s comments is that an honor code will die if students decide that enforcement is weak or biased. Here the secrecy of disciplinary processes, which is of course necessary to protect the accused, can be harmful. Rumors do circulate. Sometimes they’re inaccurate but can’t be corrected without breaching secrecy. For example, when I was on Princeton’s discipline committee, some students believed that star athletes or students with famous relatives would be let off easier. This was untrue, but the evidence to contradict it was all secret.

Academic discipline seems to have a major feedback loop. If students believe that the secret disciplinary processes are generally fair and stringent, they will be happy with the process and will tend to follow the rules. This leaves the formal disciplinary process to deal with the exceptions, which a good process will be able to handle. Students will buy in to the premise of the system, and most people will be happy.

If, on the other hand, students lose their trust in the fairness of the system, either because of false rumors or because the system is actually unfair, then they’ll lose their aversion to rule-breaking and the system, whether honor-based or not, will break down. Several of Garnett’s readers tell a story like this.

One has to wonder whether it makes much difference in practice whether a system is formally honor-based or not. Either way, students have an ethical duty to follow the rules. Either way, violations will be punished if they come to light. Either way, at least a few students will cheat without getting caught. The real difference is whether the institution conspicuously trusts the students to comply with the rules, or whether it instead conspicuously polices compliance. Conspicuous trust is more pleasant for everybody, if it works.

[Feel free to talk about your own experiences in the comments. I’m especially eager to hear from current or past Princeton students.]

Breathalyzers and Open Source

Lawyers for 150 Floridians accused of drunk driving have asked a court to order the disclosure of the source code for software running in the breathalyzer machines used by police to analyze their blood alcohol levels, according to a Tom Sanders story at vnunet.

The defendants say they have the right to examine the machines that accused them, and that a meaningful examination requires access to the machines’ software. Prosecutors say the code is a trade secret.

The accused are right that one needs the code to understand fully how the machines work. The machines consist of sensors, a user interface, and control software. The software is the “brain” of the machine, and it is almost certainly involved in the calculations that derive a blood alcohol value from the sensor readings, as well as the display of the calculated value. If the accused have the right to fully examine the machines – and the article says that they do under Florida law – then they should see the source code.
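
To see why the source code matters, consider a deliberately simplified sketch of the kind of calculation such software might perform. This is a hypothetical Python sketch, not the vendor’s actual code; every constant and name in it is an illustrative assumption. The 2100:1 breath-to-blood partition ratio is a standard used in breath testing, but how a particular machine applies it is exactly what the code would reveal:

    # Hypothetical sketch of a breathalyzer's core computation.
    # The constants, names, and rounding rule are illustrative
    # assumptions, not the actual vendor code.

    PARTITION_RATIO = 2100      # common breath-to-blood assumption (2100:1)
    CALIBRATION_GAIN = 0.00005  # made-up sensor calibration constant

    def estimate_bac(raw_sensor_value: float) -> float:
        """Estimate blood alcohol (grams per deciliter) from a raw reading."""
        # Calibration: convert the raw electrical reading into a breath
        # alcohol concentration (grams per liter of breath).
        breath_g_per_liter = raw_sensor_value * CALIBRATION_GAIN

        # Apply the fixed partition ratio to estimate blood alcohol in
        # grams per liter of blood, then convert to grams per deciliter.
        blood_g_per_deciliter = breath_g_per_liter * PARTITION_RATIO / 10

        # The rounding rule matters: truncating versus rounding to the
        # nearest 0.001 can move a result across the 0.08 legal limit.
        return round(blood_g_per_deciliter, 3)

Even in a toy version like this, the calibration constant, the hard-coded partition ratio, and the rounding rule all silently shape the number the machine reports. None of those choices is visible from the outside of the box, which is why examining the machine arguably means examining its code.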

Contrary to the article and some other commentators, this is not a dispute over whether the software should be open source. The accused aren’t seeking to open the software to everybody; they only want it opened to their legal teams.

There are standard practices for handling trade-secret information that must be turned over in court cases. A court will typically establish a protective order, which is a kind of nondisclosure agreement covering secret material that is turned over by one side to the other. The protective order will require parties to keep the information secret and to use it only for purposes related to the court proceedings. Typically the information can be turned over to a limited number of expert analysts who have also signed the protective order. Documents containing secret information are filed under seal, and testimony about secret matters may take place in a closed courtroom.

So this issue is not about open source, but about ensuring fairness for the accused. If they’re going to be accused based on what some machine says, then they ought to be allowed to challenge the accuracy of the machine. And they can’t do that unless they’re allowed to know how the machine works.

You might argue that the machine’s technical manuals convey enough information. Having read many manuals and examined the innards of many software systems, I’m skeptical of such claims. Often, knowing how the maker says a machine works is a poor substitute for knowing how it actually works. If a machine is flawed, it’s likely the maker will either (a) not know about the flaw or (b) be unwilling to admit it exists.

If the article’s description of Florida law is correct, this seems like a pretty easy decision for the court.

Mossberg Takes on DRM, Urges CD-DRM Boycott

Walt Mossberg, whose Personal Technology column in the Wall Street Journal is a must-read for many influential but non-geeky technology enthusiasts, discusses the DRM issue in today’s column. Not much in the column will be new to regular readers here, or to anyone immersed in the digital copyright issue. But of course Mossberg writes for a different audience, and the column serves that audience well by explaining the issues clearly and maintaining a moderate tone.

Mossberg writes:

    In my view, both sides have a point, but the real issue isn’t DRM itself – it’s the manner in which DRM is used by copyright holders. Companies have a right to protect their property, and DRM is one means to do so. But treating all consumers as potential criminals by using DRM to overly limit their activities is just plain wrong.

    Let’s be clear: The theft of intellectual property on the Internet is a real problem. Millions of copies of songs, TV shows and movies are being distributed over the Internet by people who have no legal right to do so, robbing media companies and artists of rightful compensation for their work.

    Even if you think the record labels and movie studios are stupid and greedy, as many do, that doesn’t entitle you to steal their products. If your local supermarket were run by people you didn’t like, and charged more than you thought was fair, you wouldn’t be entitled to shoplift Cheerios from its shelves.

    On the other hand, I believe that consumers should have broad leeway to use legally purchased music and video for personal, noncommercial purposes in any way they want – as long as they don’t engage in mass distribution. They should be able to copy it to as many personal digital devices as they own, convert it to any format those devices require, and play it in whatever locations, at whatever times, they choose.

Mossberg urges music and movie companies to use DRM to limit large-scale pirates, while giving ordinary users wide leeway for personal use:

    Instead of using DRM to stop some individual from copying a song to give to her brother, the industry should be focusing on ways to use DRM to stop the serious pirates – people who upload massive quantities of music and videos to so-called file-sharing sites, or factories in China that churn out millions of pirate CDs and DVDs.

This is a nice vision, but it’s not really possible. It’s abundantly clear by now that no DRM system can stop serious pirates. A DRM system that stops serious pirates, and simultaneously gives broad leeway to ordinary users, is even harder to imagine. It’s not going to happen.

Although he doesn’t address it directly, Mossberg implicitly rejects the other argument for DRM, which says that DRM can enable new pricing models for content and can therefore foster market efficiency. Mossberg says flatly that consumers should have a broad right to make personal uses of content they have bought.

The most surprising part of the column – remember that this is in the Wall Street Journal – is Mossberg’s call for a boycott of products with restrictive DRM, such as copy-protected CDs:

    Until then, I suggest that consumers avoid stealing music and videos, but also boycott products like copy-protected CDs that overly limit usage and treat everyone like a criminal. That would send the industry a message to use DRM more judiciously.

Whether it’s a flat boycott, or just a disinclination to buy such products, this would have an impact on the industry’s DRM choices.

To make it happen, people need to learn which CDs use DRM and which don’t. One way to tell is to look for the official CD logo on the package: if the logo is missing, the disc probably doesn’t comply with the CD standard, and the noncompliance is probably caused by DRM. Alternatively, somebody could set up a website with information about which discs use DRM. It would be nice, too, to have a site with information about DVDs, to keep track, for instance, of which discs force viewers to watch movie previews before seeing the movie they bought.

It can’t be too hard to set up such a site. If you put ads on it, you could probably make a profit. Who wants to build it?
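
As a rough illustration of how little code the core of such a site would need, here is a minimal sketch using only the Python standard library. Everything in it – the disc titles, the port number, the plain-text responses – is a made-up placeholder, not a description of any site that actually exists:

    # Minimal sketch of a disc-DRM lookup service, standard library only.
    # The disc data below is entirely made up for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    # A real site would back this with a user-editable database;
    # a hand-maintained dictionary is enough to start.
    DISC_INFO = {
        "example artist - example album": "copy-protected; not a standard CD",
        "another artist - another album": "standard CD, no DRM reported",
    }

    class LookupHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Read the disc title from the query string, e.g.
            # /?title=example+artist+-+example+album
            query = parse_qs(urlparse(self.path).query)
            title = query.get("title", [""])[0].strip().lower()
            answer = DISC_INFO.get(title, "unknown disc - no report on file")
            body = answer.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Serve lookups at http://localhost:8000/
        HTTPServer(("", 8000), LookupHandler).serve_forever()

A real site would of course need a submission form and some vetting of user reports, but the lookup itself really is this simple; the hard part is gathering and maintaining trustworthy data.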