November 29, 2005

Breathalyzers and Open Source

Lawyers for 150 Floridians accused of drunk driving have asked a court to order the disclosure of the source code for software running in the breathalyzer machines used by police to analyze their blood alcohol levels, according to a Tom Sanders story on vnunet.

The defendants say they have the right to examine the machines that accused them, and that a meaningful examination requires access to the machines’ software. Prosecutors say the code is a trade secret.

The accused are right that one needs the code to understand fully how the machines work. The machines consist of sensors, a user interface, and control software. The software is the “brain” of the machine, and it is almost certainly involved in the calculations that derive a blood alcohol value from the sensor readings, as well as the display of the calculated value. If the accused have the right to fully examine the machines – and the article says that they do under Florida law – then they should see the source code.
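To give a sense of what those calculations involve, here is a hypothetical sketch in Python. Real evidential breathalyzers are more elaborate, with calibration curves, replicate samples, and interference checks, but U.S. devices conventionally convert breath alcohol to a blood-equivalent figure using a fixed 2100:1 blood-to-breath partition ratio. The calibration gain below is invented, and buried constants like these are exactly what a defense expert would want to inspect in the source code:

    # Hypothetical sketch of a breathalyzer's core calculation; real
    # instruments add calibration curves, replicate samples, and
    # interference detection, none of which is modeled here.

    PARTITION_RATIO = 2100  # conventional blood:breath ratio in U.S. devices

    def bac_percent(breath_g_per_liter):
        """Blood-equivalent alcohol (grams per 100 mL, i.e. % BAC) from
        a breath alcohol concentration in grams per liter of breath."""
        return breath_g_per_liter * PARTITION_RATIO / 10

    def bac_from_sensor(raw_reading, gain):
        """gain is an invented calibration constant mapping the raw
        sensor value to grams of alcohol per liter of breath."""
        return bac_percent(raw_reading * gain)

    # A breath concentration of about 0.000381 g/L corresponds to
    # roughly 0.08% BAC, the common legal limit.
    print(round(bac_percent(0.000381), 3))  # 0.08

If the partition ratio or the gain is wrong for a particular device or defendant, the displayed number is wrong too, which is the heart of the defense’s argument.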

Contrary to the article and some other commentators, this is not a dispute over whether the software should be open source. The accused aren’t seeking to open the software to everybody; they only want it opened to their legal teams.

There are standard practices for handling trade-secret information that must be turned over in court cases. A court will typically establish a protective order, which is a kind of nondisclosure agreement covering secret material that is turned over by one side to the other. The protective order will require parties to keep the information secret and to use it only for purposes related to the court proceedings. Typically the information can be turned over to a limited number of expert analysts who have also signed the protective order. Documents containing secret information are filed under seal, and testimony about secret matters may take place in a closed courtroom.

So this issue is not about open source, but about ensuring fairness for the accused. If they’re going to be accused based on what some machine says, then they ought to be allowed to challenge the accuracy of the machine. And they can’t do that unless they’re allowed to know how the machine works.

You might argue that the machine’s technical manuals convey enough information. Having read many manuals and examined the innards of many software systems, I’m skeptical of such claims. Often, knowing how the maker says a machine works is a poor substitute for knowing how it actually works. If a machine is flawed, it’s likely the maker will either (a) not know about the flaw or (b) be unwilling to admit it exists.

If the article’s description of Florida law is correct, this seems like a pretty easy decision for the court.

Mossberg Takes on DRM, Urges CD-DRM Boycott

Walt Mossberg, whose Personal Technology column in the Wall Street Journal is a must-read for many influential but non-geeky technology enthusiasts, discusses the DRM issue in today’s column. Not much in the column will be new to regular readers here, or to anyone immersed in the digital copyright issue. But of course Mossberg writes for a different audience, and the column serves that audience well by explaining the issues clearly and maintaining a moderate tone. He writes:

In my view, both sides have a point, but the real issue isn’t DRM itself – it’s the manner in which DRM is used by copyright holders. Companies have a right to protect their property, and DRM is one means to do so. But treating all consumers as potential criminals by using DRM to overly limit their activities is just plain wrong.

Let’s be clear: The theft of intellectual property on the Internet is a real problem. Millions of copies of songs, TV shows and movies are being distributed over the Internet by people who have no legal right to do so, robbing media companies and artists of rightful compensation for their work.

Even if you think the record labels and movie studios are stupid and greedy, as many do, that doesn’t entitle you to steal their products. If your local supermarket were run by people you didn’t like, and charged more than you thought was fair, you wouldn’t be entitled to shoplift Cheerios from its shelves.

On the other hand, I believe that consumers should have broad leeway to use legally purchased music and video for personal, noncommercial purposes in any way they want – as long as they don’t engage in mass distribution. They should be able to copy it to as many personal digital devices as they own, convert it to any format those devices require, and play it in whatever locations, at whatever times, they choose.

Mossberg urges music and movie companies to use DRM to limit large-scale pirates, while giving ordinary users wide leeway for personal use.

Instead of using DRM to stop some individual from copying a song to give to her brother, the industry should be focusing on ways to use DRM to stop the serious pirates – people who upload massive quantities of music and videos to so-called file-sharing sites, or factories in China that churn out millions of pirate CDs and DVDs.

This is a nice vision, but it’s not really possible. It’s abundantly clear by now that no DRM system can stop serious pirates. A DRM system that stops serious pirates, and simultaneously gives broad leeway to ordinary users, is even harder to imagine. It’s not going to happen.

Although he doesn’t address it directly, Mossberg implicitly rejects the other argument for DRM, which says that DRM can enable new pricing models for content and can therefore foster market efficiency. Mossberg says flatly that consumers should have a broad right to make personal uses of content they have bought.

The most surprising part of the column – remember that this is in the Wall Street Journal – is Mossberg’s call for a boycott of products with restrictive DRM, such as copy-protected CDs.

Until then, I suggest that consumers avoid stealing music and videos, but also boycott products like copy-protected CDs that overly limit usage and treat everyone like a criminal. That would send the industry a message to use DRM more judiciously.

Whether it’s a flat boycott, or just a disinclination to buy such products, this would have an impact on the industry’s DRM choices.

To make it happen, people need to learn which CDs use DRM and which don’t. One way to tell is to look for the official CD logo on the package: if the logo is missing, the disc probably doesn’t comply with the CD standard, and the noncompliance is probably caused by DRM. Alternatively, somebody could set up a website with information about which discs use DRM. It would be nice, too, to have a site with information about DVDs, to keep track, for instance, of which discs force viewers to watch movie previews before seeing the movie they bought.

It can’t be too hard to set up such a site. If you put ads on it, you could probably make a profit. Who wants to build it?

EFF Researchers Decode Hidden Codes in Printer Output

Researchers at the EFF have apparently confirmed that certain color printers put hidden marks in the pages they print, and they have decoded the marks for at least one printer model.

The marks from Xerox DocuColor printers are encoded in an array of very small yellow dots that appear all over the page. The dots encode the date and time when the page was printed, along with what appears to be a serial number for the printer. You can spot the dots with blue light and a 10X magnifier, and you can then decode the dots to get the date, time, and serial number.

Many other printers appear to do something similar; the EFF has a list.

The privacy implications are obvious. It’s now possible to tell when a document was printed, and when two documents were printed on the same printer. It’s also possible, given a document and a printer, to tell whether the document was printed on that printer.

Apparently, this was done at the direction of the U.S. government.

The U.S. Secret Service admitted that the tracking information is part of a deal struck with selected color laser printer manufacturers, ostensibly to identify counterfeiters. However, the nature of the private information encoded in each document was not previously known.

Xerox previously admitted that it provided these tracking dots to the government, but indicated that only the Secret Service had the ability to read the code.

The assertion that only the Secret Service can read the code is false. The code is quite straightforward. For example, there is one byte for (the last two digits of) the year, one byte for the month, one byte for the day, one byte for the hour, and one byte for the minute.
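To see just how straightforward, here is a minimal decoding sketch in Python, assuming only the byte layout described above (one byte each for year, month, day, hour, and minute). The serial-number length and the century offset are my own assumptions for illustration; the actual DocuColor dot grid is laid out differently:

    def decode_fields(payload):
        """Decode a simplified tracking-dot payload.

        Assumes the byte order (year, month, day, hour, minute)
        described above, followed by a printer serial number whose
        length and encoding are hypothetical.
        """
        year, month, day, hour, minute = payload[:5]
        serial = int.from_bytes(payload[5:], "big")  # layout assumed
        return {
            "year": 2000 + year,  # only two digits stored; century assumed
            "month": month,
            "day": day,
            "hour": hour,
            "minute": minute,
            "serial": serial,
        }

    # Example: a page printed 2005-11-22 at 14:30 on printer #21052
    print(decode_fields(bytes([5, 11, 22, 14, 30]) + (21052).to_bytes(4, "big")))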

Now that the code is known, it should be possible to forge the marks. For example, I could cook up an array of little yellow dots that encode any date, time, and serial number I like. Then I could add the dots to any image I like, and print out the image-plus-dots on a printer that doesn’t make the marks. The resulting printout would have genuine-looking marks that contain whatever information I chose.
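Under the same simplified layout, forging is just the inverse function plus a way to render bytes as dots. The grid geometry below (one column of eight dot positions per byte) is purely illustrative, not the actual DocuColor pattern:

    def encode_fields(year, month, day, hour, minute, serial):
        """Inverse of decode_fields: pack arbitrary chosen values
        into the same simplified byte layout."""
        return bytes([year % 100, month, day, hour, minute]) + serial.to_bytes(4, "big")

    def bytes_to_dot_grid(payload):
        """Map each byte to a column of 8 dot positions, one per bit.
        A 1 means 'print a tiny yellow dot' at that row."""
        return [[(b >> row) & 1 for row in range(8)] for b in payload]

    # Dots claiming the page was printed 1999-04-01 at 03:15 on printer #123456
    fake = encode_fields(1999, 4, 1, 3, 15, serial=123456)
    for column in bytes_to_dot_grid(fake):
        print(column)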

This could have been prevented by using cryptography to make marks that can be decoded only by the Secret Service, and that don’t allow anyone but the Secret Service to detect whether two documents came from the same printer. This would have added some complexity to the scheme, but that seems like a good tradeoff in a system that was supposed to stay secret for a while.
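As a sketch of what that might look like: encrypt the payload to the Secret Service’s public key using randomized padding, so that only the key holder can decrypt, and two printouts of the same payload produce unlinkable ciphertexts. The choice of RSA-OAEP and the Python cryptography package here is mine, purely for illustration:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in for the Secret Service's key pair (illustrative only).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    payload = bytes([5, 11, 22, 14, 30]) + (21052).to_bytes(4, "big")

    # OAEP is randomized: the same payload encrypts to different
    # ciphertexts, so an observer can't link two pages to one printer,
    # but the key holder can still decrypt both.
    ct1 = public_key.encrypt(payload, oaep)
    ct2 = public_key.encrypt(payload, oaep)
    assert ct1 != ct2
    assert private_key.decrypt(ct1, oaep) == payload

(A 2048-bit RSA ciphertext is 256 bytes, many more dots than the simple code above needs, so a real design would want something more compact; the point here is the unlinkability.)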

A Visit From Bill Gates

Bill Gates visited Princeton on Friday, accompanied by his father, a prominent Seattle lawyer who now heads the Gates Foundation, and by Kevin Schofield, a Microsoft exec (and Princeton alumnus) who helped to plan the university visits.

After speaking briefly with Shirley Tilghman, Princeton’s president, Mr. Gates spent an hour in a roundtable discussion with a smallish group of computer science faculty. I was lucky enough to be one of them. The meeting was closed, so I won’t give you a detailed play-by-play. Essentially, we told him about what is happening in computer science at Princeton; he asked questions and conversation ensued. We talked mostly about computer science education. Along the way I gave a quick description of the new infotech policy course that will debut in the spring. Overall, it was a good, high-energy discussion, and Mr. Gates showed a real passion for computer science education.

After the roundtable, he headed off to Richardson Auditorium for a semi-public lecture and Q&A session. (I say semi-public because there wasn’t space for everybody who wanted to get in; tickets were allocated to students by lottery.) The instructions that came with my ticket made it seem like security in the auditorium would be very tight (no backpacks, etc.), but in fact the security measures in place were quite unobtrusive. An untrained eye might not have noticed anything different from an ordinary event. I showed up for the lecture at the last minute, coming straight from the faculty roundtable, so I had one of the worst seats in the whole place. (Not that I’m complaining – I certainly wouldn’t have traded away my seat in the faculty roundtable for a better seat at the lecture!)

After an introduction from Shirley Tilghman, Mr. Gates took the stage. He stood alone on the stage and talked for a half-hour or so. His presentation was punctuated by two videos. The first showed a bunch of recent Princeton alums who work at Microsoft talking about life at Microsoft in a semi-serious, semi-humorous way. (The highlight was seeing Corey in a toga.) The second video was a five-minute movie in which Mr. Gates finds himself in the world of Napoleon Dynamite. It co-stars Jon Heder, who played Napoleon in the movie. I haven’t seen the original movie but I’m told that many of the lines and gags in the video come from the movie. People who know the original movie seem to have found the video funny.

The theme of the lecture was the seamless coolness of the future computing environment. It was heavy on promotion and demonstrations of Microsoft products.

The Q&A was pretty interesting. He was asked how to reconcile his current cheerleading for C.S. education with his own history of dropping out of college. He had a funny and thoughtful answer. I assume he’s had plenty of chances to hone his answer to that question.

A student asked him a question about DRM. His answer was fairly general, stressing the importance of both consumer flexibility and revenue for creators. He went on to say some harsh things about Blu-ray DRM, saying that the system over-restricted consumers’ use and that its content-industry backers were making a mistake by pushing for it.

(At this point I had to leave due to a previous commitment, so from here on I’m relying on reports from people who were there.)

Another student asked him about intellectual property, suggesting that Microsoft was both a beneficiary and a victim of a strong patent system. Mr. Gates said that the patent system is basically sound but could benefit from some tweaking. He didn’t elaborate, but I assume he was referring to patent reform suggestions Microsoft has made previously.

After the Q&A, Mr. Gates accepted the “Crystal Tiger” award from a student group. Then he left for his next university visit, reportedly to Howard University.

Tax Breaks for Security Tools

Congress may be considering offering tax breaks to companies that deploy cybersecurity tools, according to an Anne Broache story at news.com. This might be a good idea, depending on how it’s done.

I’ve written before about the economics of cybersecurity. A user’s investment in security protects the user himself, and he already has an incentive to pay for the efficient level of self-protection. But each user’s security choices also affect others: if Alice’s computer is compromised, it can be used as a springboard for attacking Bob’s computer, so Alice’s decisions affect Bob’s security. Yet Alice has little or no incentive to invest in protecting Bob. This kind of externality is common, and it leads to underinvestment in security.
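A toy example, with numbers invented purely for illustration: suppose a protective measure costs Alice 10, cuts her own expected loss by 6, and cuts Bob’s expected loss by 8. Weighing only her own benefit, Alice skips it; counting both benefits, society would want her to buy it:

    # Toy externality model; all numbers are invented for illustration.
    cost = 10           # what the measure costs Alice
    benefit_self = 6    # reduction in Alice's own expected loss
    benefit_others = 8  # reduction in Bob's expected loss (the externality)

    alice_invests = benefit_self > cost                       # private decision
    socially_optimal = benefit_self + benefit_others > cost   # social decision

    print("Alice invests on her own:", alice_invests)         # False
    print("Society wants her to invest:", socially_optimal)   # True

Any subsidy worth more than 4 to Alice flips her private decision, which is the kind of incentive adjustment discussed next.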

Public policy can try to fix this by adjusting incentives in the right direction. A good policy will boost incentives to deploy the kinds of security measures that tend to protect others. Protecting oneself is good, but there is already an adequate incentive to do that; what we want is a bigger incentive to protect others. (To the extent that the same steps tend to protect both oneself and others, it makes sense to boost incentives for those steps too.)

A program along these lines would presumably give tax breaks to people and organizations that use networked computers in a properly secure way. In an ideal world, breaks would be given to those who do well in managing their systems to protect others. In practice, of course, we can’t afford to do a fancy security evaluation on each taxpayer to see whether he deserves a tax break, so we would instead give the break to those who meet some formalized criteria that serve as a proxy for good security. Designing these criteria so that they correlate well with the right kind of security, and so that they can’t be gamed, is the toughest part of designing the program. As Bruce Schneier says, the devil is in the details.

Another approach, which may be what Rep. Lungren is trying to suggest in the original story, is to give tax breaks to companies that develop security technologies. A program like this might be mere corporate welfare, or it might be designed to serve a useful public purpose. To be useful, it would have to lead to lower prices for the right kinds of security products, or to better performance at the same price. Whether it would succeed depends, again, on the details of how the program is designed.

If the goal is to foster more capable security products in the long run, there is of course another approach: government could invest in basic research in cybersecurity, or at least it could reverse the current disinvestment.