Archives for 2005

Course Blog: Lessons Learned

This semester I had students in my course, “Information Technology and the Law,” write for a course blog. This was an experiment, but it worked out quite well. I will definitely do it again.

We required each of the twenty-five students in the course to post at least once a week. Each student was assigned a particular day of the week on which his or her entries were due. We divided the due dates evenly among the seven days of the week, to ensure an even flow of new posts, and to facilitate discussion among the students. The staggered due dates worked nicely, and had the unexpected benefit of evening out the instructors’ and students’ blog reading workload.
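The round-robin assignment described above is simple enough to sketch (student names here are placeholders, not the actual roster):

```python
from itertools import cycle

# Spread 25 students across the seven weekdays as evenly as possible.
# Since 25 = 3 * 7 + 4, four days end up with 4 students and three with 3.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
students = [f"student{i:02d}" for i in range(1, 26)]

schedule = {day: [] for day in days}
for student, day in zip(students, cycle(days)):
    schedule[day].append(student)

for day in days:
    print(day, len(schedule[day]))
```

Cycling through the weekdays, rather than filling one day before moving to the next, is what keeps the daily load within one student of every other day.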

To be honest, I’m not sure how religiously students read the blog. Many entries had comments from other students, but I suspect that many students read the blog irregularly. My guess is that most of them read it, most of the time.

We told students that they should write 400-500 words each week, about any topic related to the course. As expected, most students wrote about the topics we were discussing in class at the moment. Some students would read ahead a bit and then post about the readings before we discussed them in class. Others would reflect on recent in-class discussions. In both cases, the blogging helped to extend the class discussion. A few students wrote about material outside the readings, but within the course topic.

One of the biggest benefits, which I didn’t fully appreciate in advance, was that students got to see the writing their peers submitted. This was valuable not only for the exchange of ideas, but also in helping students improve their writing. Often students learn about the standard of performance only by reading comments from a grader; here they could see what their peers were producing.

To protect students’ privacy, we gave them the option of writing under a pseudonym. Seven of twenty-five students used a pseudonym. Students had to reveal their pseudonym to the instructors, but it was up to them whether to reveal it to the other students in the course. A few students chose pseudonyms that would be obvious to people in the course; for example, one student used his first name. Most of the others seemed willing to reveal their pseudonyms to the rest of the class, though not everyone had occasion to do so.

I was pleasantly surprised by the quality of the writing. Most of it was good, and some was top-notch. Comments from peers, and from outsiders, were also helpful. However, it seems unlikely that many outsiders would read such a course blog, given the sheer volume of postings.

The logistics worked out pretty well. We used WordPress, with comment moderation enabled (to fend off comment spam). We sent out a brief email with instructions at the beginning, and students caught on quickly.

On the whole, the course blog worked out better than expected, and I will use the same method in the future.

[If any students from the course read this, please chime in via the comments. I have already submitted course grades, so you can be brutally honest.]

Register of Copyrights Misunderstands Copyright

The office of the U.S. Register of Copyrights recently released its annual report for 2004. Along with some useful information about the office’s function, the report includes a sort of editorial about the copyright system, entitled “Copyright in the Public Eye.” The editorial displays a surprising misunderstanding of the purposes of copyright.

Consider, for example, this sentence:

The Founders knew what they were doing when they made explicit that Congress was to secure to authors an “exclusive Right.” They understood that individual rights, especially property-like rights, were the key to establishing a stable and productive society.

Note the subtle rewriting of the Constitutional language. The Constitution does not direct Congress to establish copyright, but merely allows it to do so. Let’s be clear: the implication that the Founders would approve of today’s copyright statute finds no real support in the historical record. The first Congress passed a copyright act, and it was vastly narrower than the one we have today.

The Constitution allows Congress to do other things, too, such as imposing taxes and regulating interstate commerce. But nobody would argue that the Founders wanted the broadest possible taxation and regulation. The Founders trusted Congress to use its power judiciously, in copyright as in other areas.

Continuing with the Register of Copyrights editorial:

[The Founders] also trusted copyright owners to use those rights for the public good by offering creative works to the public. It is important for copyright owners to fulfill their end of the bargain with the public – to use the exclusive rights they have been granted to provide the public with convenient access to copyrighted works.

The implication here is that copyright owners can choose whether to “fulfill their end of the bargain with the public.” (If the bargain is mandatory, why bother urging copyright owners to fulfill it?) In other words, the public’s ability to use copyrighted works exists only at the pleasure of copyright owners. This is contrary even to our current bloated copyright law, which carefully limits the exclusive rights of copyright owners, and says explicitly that certain types of use are not infringement.

Continuing:

How copyright is perceived will largely depend on how technological measures limit reproduction and distribution in ways that are painless and invisible to the public. New services need to earn a reputation based on the things they allow people to do with copyrighted works, rather than on what they prevent people from doing.

Even ignoring the questionable technology assumptions – that technology can limit redistribution, and can do so “in ways that are painless and invisible” – the implication here is that the law already allows copyright owners to overreach, but the Register of Copyrights hopes that they don’t do so. In other words, it’s up to the copyright owners to decide what the future of copyright should be.

And how will this situation play out? Let’s go back to the first paragraph of the editorial:

For the first time ordinary consumers come face-to-face with copyright as something that regulates them directly. In this situation, the copyright owner is more likely to see the user as an infringer than as a customer.

And they wonder why copyright is unpopular with the public.

New Study on Effects of E-Voting

David Card and Enrico Moretti, two economists from UC Berkeley, have an interesting new paper that crunches data on the 2004 election to shed light on the effect of touchscreen voting. The paper looks reasonable to me, but my background is not in social science, so others are better placed than I am to critique it. Here, I’ll summarize the paper’s findings.

The researchers start with datasets on county-by-county vote results in the 2004 U.S. presidential election, and county-by-county demographics, along with a list of counties that used DREs (i.e., touchscreen voting machines). It turns out that counties that used DREs tended to vote more strongly for Bush than counties that didn’t. This effect, by itself, isn’t very interesting, since there are many possible causes. For example, DREs were more popular in the South, and Bush was more popular there too.

To get a more interesting result, they redid the same calculation, while controlling for many of the factors that might have affected Bush’s vote share. To be specific, they controlled for past voting patterns (Republican and third-party voting shares in the 1992, 1996, and 2000 presidential elections), for county demographics (percent black, percent Hispanic, percent religious, percent college-educated, percent in the military, percent employed in agriculture), for average income, and for county population. They also included a per-state dummy variable that would capture any effects that were the same across all counties in a particular state. After controlling for all of these things, they still found that DRE counties tended to tilt toward Bush, compared to non-DRE counties. This discrepancy, or “DRE effect,” amounted to 0.21% of the vote.
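The kind of regression described above can be sketched with synthetic data. Everything below is made up for illustration (the counts, the effect size baked into the outcome, and the handful of controls; the paper uses far more), but it shows the mechanics: controls plus per-state dummy columns, then ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic county-level data (illustrative only, not the paper's dataset).
n = 400
state = rng.integers(0, 10, size=n)          # 10 hypothetical states
dre = rng.integers(0, 2, size=n)             # 1 if the county used DREs
past_gop = rng.normal(0.5, 0.1, size=n)      # past Republican vote share
pct_hispanic = rng.uniform(0, 0.4, size=n)   # one demographic control

# Outcome: Bush share with a built-in "DRE effect" of 0.21% (0.0021).
bush = (0.0021 * dre + 0.8 * past_gop + 0.05 * pct_hispanic
        + 0.01 * state + rng.normal(0, 0.01, size=n))

# Design matrix: the controls plus one dummy column per state
# (the fixed effects), which also serves as the intercept.
state_dummies = np.eye(10)[state]
X = np.column_stack([dre, past_gop, pct_hispanic, state_dummies])
beta, *_ = np.linalg.lstsq(X, bush, rcond=None)

print(f"estimated DRE effect: {beta[0]:.4f}")
```

The state dummies absorb anything that is constant within a state, so the DRE coefficient is estimated only from variation across counties after the controls are netted out; with this synthetic data the estimate comes back close to the 0.0021 that was built in.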

So did Republicans steal the election? The researchers turn to that question next. They observe that if the DRE effect was caused by Republican cheating, then we would expect the DRE effect to be larger in places where Republicans had a motive to cheat (because the election was close), and where Republicans had an opportunity to cheat (because they controlled the election bureaucracy). Yet further analysis shows that the DRE effect was not larger in states where the election was close, and was not larger in states with Republican governors or Republican secretaries of state. Therefore it seems unlikely that outright vote-stealing can account for the DRE effect.

The researchers next looked at how DRE use correlated with voter turnout. They found that voter turnout was roughly 1% lower in counties that used DREs, after controlling for all of the factors listed above. Interestingly, the drop in turnout tended to be larger in counties with larger Hispanic populations. (The same effect does not seem to exist for black voters.) This suggests a possible cause of the DRE effect: DREs may suppress turnout among Hispanic voters, who tend to vote for Democrats overall (although not in Florida).

Why might DREs suppress the Hispanic vote? Perhaps Hispanics are more likely to be intimidated by the high-tech DREs. Perhaps DREs are harder to use for voters who aren’t native English speakers. Perhaps DREs made people wait longer to vote, and Hispanic voters were less able or less willing to wait. Or perhaps there is some other cultural issue that made Hispanic voters wary of DREs.

It’s worth noting, though, that when the researchers estimated the magnitude of the Hispanic-vote-suppression mechanism, they found that it accounted for only about 15% of the overall DRE effect. Most of the DRE effect is still unexplained.

This is an interesting paper, but it is far from the last word on the subject.

UPDATE (Thur. May 19): Steve Purpura, who knows this stuff much better than I do, has doubts about this study. See the comments for his take.

RFID on DVDs

A group at UCLA is studying how to deter DVD copying by putting RFID chips on DVDs, according to a story in RFID Journal by Mary Catherine O’Connor. (Noted by Rik Lambers at CoCo.) The article doesn’t say much about what they are planning. Reading between the lines, it looks like the group hasn’t reached the really interesting technical challenges yet.

Putting RFID on DVDs could be a terrible idea if done the wrong way. But if done correctly, it just might make sense.

One bad approach is to store part of the decryption key (needed to decrypt the data on the DVD) on an RFID chip that is attached to the DVD. The DVD player would read this partial key from the RFID and use it, along with the DVD player’s secret key, to decrypt the content. Doing this doesn’t make the content much harder to copy. And it creates several new problems: the new DVDs wouldn’t play in existing players, and the RFID might expose customers to tracking if they carry RFID-DVDs around with them.
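To make the weakness concrete, here is a toy version of the key-splitting idea, using a plain two-way XOR split as a stand-in (real DVD encryption works differently, and the function names are mine). Neither half alone reveals the disc key, but the scheme adds no copy resistance: any player that reads both halves recovers the full key, and the decrypted content can then be copied freely.

```python
import secrets

def split_key(disc_key: bytes):
    """Split a disc key into an RFID half and a disc half via XOR."""
    rfid_part = secrets.token_bytes(len(disc_key))
    disc_part = bytes(a ^ b for a, b in zip(disc_key, rfid_part))
    return rfid_part, disc_part

def recombine(rfid_part: bytes, disc_part: bytes) -> bytes:
    """Any player holding both halves recovers the original key."""
    return bytes(a ^ b for a, b in zip(rfid_part, disc_part))
```

The round trip always succeeds, which is exactly the problem: the "protection" lives entirely in the player, not in the math.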

A better approach is to use RFID to put a unique “bonus code” on each individual DVD disc. Then you can provide online “bonus features” to users who present a valid bonus code that isn’t being used elsewhere at the same time. If the bonus features are good enough, users will value getting a bonus code and so will be willing to pay more for genuine discs. And the discs will work in existing DVD players, albeit without the bonus features.

Of course, bonus codes can be copied, just like content. But if bonus codes are used to get live access to a website, and that website checks to avoid duplicate use of bonus codes, then widely copied bonus codes will be less useful, and users will have an incentive to protect their bonus codes from copying.
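The duplicate-use check described above might look something like this on the server side. The article proposes no protocol, so everything here is an assumption: the function names, the in-memory session table, and the one-hour "in use" window are all invented for illustration.

```python
import time

ACTIVE_WINDOW = 3600  # seconds a redeemed code counts as "in use" (assumed)

active_sessions = {}  # bonus_code -> timestamp of most recent redemption

def redeem(bonus_code, valid_codes, now=None):
    """Grant access if the code is genuine and not in concurrent use."""
    now = time.time() if now is None else now
    if bonus_code not in valid_codes:
        return False  # not a code we ever issued
    last_seen = active_sessions.get(bonus_code)
    if last_seen is not None and now - last_seen < ACTIVE_WINDOW:
        return False  # same code already active somewhere else
    active_sessions[bonus_code] = now
    return True
```

A widely copied code trips the concurrency check almost immediately, which is what gives the legitimate owner an incentive to keep the code private.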

You don’t need RFID to bundle bonus codes with DVDs. Instead, you could put the bonus code onto the DVD with the content, but this may raise manufacturing costs, by requiring each DVD to contain some unique data, rather than being stamped out in large, identical batches. Or you could print the bonus code onto a sticker and attach the sticker to the DVD case or to the DVD itself. That’s low-tech and effective, but it requires the user to manually enter the bonus code, which is a hassle. RFID allows the DVD player to read the bonus code directly.

If you wanted, you could put the bonus code on both a sticker and an RFID. The DVD player would read the RFID if it could; otherwise the user could enter information from the sticker. Users who worried about privacy could tear off the RFID and just use the sticker. Computer-based DVD players could remember the bonus codes, so the user didn’t need the RFID or sticker anymore.

There are still privacy problems, but these could be addressed if you had a more advanced RFID chip that could execute the right cryptographic protocol. Then the chip could authenticate itself to the bonus features website, in a way that didn’t allow any individual RFID chip to be tracked from moment to moment.

This may be overkill. It’s a lot of technology to get you a relatively small benefit, compared to alternatives like using stickers, or using a disc manufacturing process that can put a small amount of unique data on each disc. But the idea of using RFID with DVDs isn’t totally crazy.

Newsweek Fails AP Math

Newsweek just released its list of the top 100 U.S. high schools. Like the more famous U.S. News college rankings, Newsweek relies on a numerical formula. Here is Newsweek’s formula:

Public schools are ranked according to a ratio devised by Jay Mathews: the number of Advanced Placement and/or International Baccalaureate tests taken by all students at a school in 2004 divided by the number of graduating seniors.

Both parts of this ratio are suspect. In the numerator, they count the number of students who show up for AP/IB tests, not the number who get an acceptable score. Schools that require their students to take AP/IB tests will do well on this factor, regardless of how poorly they educate their students. In the denominator is the number of students who graduate. That’s right – every student who graduates lowers the school’s rating.

To see the problems with Newsweek’s formula, let’s consider a hypothetical school, Monkey High, where all of the students are monkeys. As principal of Monkey High, I require my students to take at least one AP test. (Attendance is enforced by zookeepers.) The monkeys do terribly on the test, but Newsweek gives them credit for showing up anyway. My monkey students don’t learn enough to earn a high school diploma – not to mention their behavioral problems – so I flunk them all out. Monkey High gets an infinite score on the Newsweek formula: many AP tests taken, divided by zero graduates. It’s the best high school in the universe!
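The Monkey High pathology falls straight out of the formula. A two-line sketch (the counts are hypothetical):

```python
def newsweek_score(ap_tests_taken, graduating_seniors):
    """Newsweek's ratio: AP/IB tests *taken* (not passed) over graduates."""
    if graduating_seniors == 0:
        return float("inf")  # Monkey High: division by zero
    return ap_tests_taken / graduating_seniors

print(newsweek_score(300, 100))  # an ordinary school: 3.0
print(newsweek_score(200, 0))    # Monkey High: inf
```

Every graduate increases the denominator, so a school that flunks everyone while marching them into the exam room scores unboundedly well.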

Why does Newsweek use this formula? There are two reasons, I think. First, they seem to conflate AP courses with AP exams. It is indeed good if more students take genuine AP courses, which teach the most challenging material. But there’s no point in having students take the AP exams if they’re not prepared. Some schools require their students to take AP exams, whether they’re prepared or not. The Newsweek formula rewards those schools. Here’s Jay Mathews, in Newsweek’s online FAQ:

If I thought that those districts who pay for the test and require that students take it were somehow cheating, and giving themselves an unfair advantage that made their programs look stronger than they were, I would add that asterisk or discount them in some way. But I think the opposite is true. Districts who spend money to increase the likelihood that their students take AP or IB tests are adding value to the education of their students. Taking the test is good. It gives students a necessary taste of college trauma. It is bad that many students in AP courses avoid taking the tests just because they prefer to spend May of their senior year sunning themselves on the beach or buying their prom garb. If paying your testing fee persuades you, indeed forces you, to take the test, that is good, just as it is good if a school spends money to hire more AP teachers or makes it difficult for students to drop out of AP without a good reason.

Second, it appears that better data would have been harder to get. Schools report the number of AP tests taken, but it appears that many don’t report anything about the scores their students receive.

Given Newsweek’s questionable formula, is it picking the best schools in the U.S.? Not likely. My wife, on reading Newsweek’s list, was surprised to see Oxnard High (of Oxnard, California) ranked as the 60th best. She was born in Oxnard and went to a nearby high school, and had never thought of Oxnard High as an elite school.

(To be clear: in no way am I comparing Oxnard High to Monkey High. Oxnard High seems like a pretty typical school by U.S. standards. Many of my wife’s friends graduated from Oxnard High. But, despite what Newsweek says, it’s not one of the very best schools in the country.)

Looking at standardized test scores – the actual scores, not the percentage of students who showed up for the test – Oxnard High appears to be a bit below average among California schools. Oxnard High students had an average SAT score of 997, compared to a state average of 1012; and 23% of Oxnard students took the SAT, compared to 37% statewide. 28% of Oxnard students met University of California admissions requirements, compared to 34% statewide.

What really makes Newsweek’s formula look bad is the data on AP test scores. If we use an improved version of Newsweek’s formula – dividing the number of AP scores of 3 or above (on a 5-point scale), by the number of enrolled juniors and seniors – Oxnard High scores 0.08, compared to a state average of 0.24. Many Oxnard High students take AP tests, but few score well. These are not the statistics of a top-performing school.
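The improved formula is just as easy to compute, and it rewards results rather than attendance. The counts below are hypothetical, chosen only to reproduce the quoted 0.08 ratio; the 0.24 state average comes from the post, not the code.

```python
def improved_score(passing_ap_scores, juniors_and_seniors):
    """AP scores of 3 or above, divided by enrolled juniors and seniors."""
    return passing_ap_scores / juniors_and_seniors

print(improved_score(40, 500))  # 0.08, versus a 0.24 state average
```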

Here’s my report card for Newsweek’s high school ratings:

English: Proficient
Math: Needs Work