October 23, 2020


I wrote last week about the Analog Hole Bill, which would require almost all devices that handle analog video signals to implement a particular anti-copying scheme called CGMS-A + VEIL. Today I want to talk about how that scheme works, and what we can learn from its design.

CGMS-A + VEIL is, not surprisingly, a combination of two discrete signaling technologies called CGMS-A and VEIL. Both allow information to be encoded in an analog video signal, but they work in different ways.

CGMS-A stores a few bits of information in a part of the analog video signal called the vertical blanking interval (VBI). Video is transmitted as a series of discrete frames that are displayed one by one. In analog video signals, there is an empty space between the frames. This is the VBI. Storing information there has the advantage that it doesn’t interfere with any of the frames of the video, but the disadvantage that the information, being stored in part of the signal that nobody much cares about, is easily lost. (Nowadays, closed captioning information is stored in the VBI; but still, VBI contents are easily lost.) For example, digital video doesn’t have a VBI, so straight analog-to-digital translation will lose anything stored in the VBI. The problem with CGMS-A, then, is that it is too fragile and will often be lost as the signal is stored, processed, and translated.
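The fragility described above can be sketched in a few lines of code. This is a toy model, not real CGMS-A encoding: the `AnalogFrame` class, its field names, and the `digitize` function are all hypothetical, meant only to show how a naive analog-to-digital conversion that samples just the visible picture silently discards anything riding in the VBI.

```python
from dataclasses import dataclass, field

@dataclass
class AnalogFrame:
    """Toy model of one analog video frame: visible scan lines plus VBI data."""
    picture: list                              # visible scan lines
    vbi: dict = field(default_factory=dict)    # e.g. {"cgms_a": 0b101}

def digitize(frame: AnalogFrame) -> list:
    """Naive A/D conversion: samples only the visible picture.
    Digital video has no VBI, so the VBI contents are simply dropped."""
    return list(frame.picture)

analog = AnalogFrame(picture=["line 1", "line 2"], vbi={"cgms_a": 0b101})
digital = digitize(analog)

# The picture survives the conversion intact...
assert digital == ["line 1", "line 2"]
# ...but the CGMS-A bits, stored only in the VBI, are gone.
```

Nothing in the digital output even hints that control bits were ever present, which is exactly why the scheme needs a second, in-picture signal.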

There’s one other odd thing about CGMS-A, at least as it is used in the Analog Hole Bill. It’s remarkably inefficient in storing information. The version of CGMS-A used there (with the so-called RCI bit) stores three bits of information (if it is present), so it can encode eight distinct states. But only four distinct states are used in the bill’s design. This means that it’s possible, without adding any bits to the encoding, to express four more states that convey different information about the copyright owner’s desires. For example, there could be a way for the copyright owner to signal that the customer was free to copy the video for personal use, or even that the customer was free to retransmit the video without alteration. But our representatives didn’t see fit to support those options, even though there are unused states in their design.
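The arithmetic behind that inefficiency is simple enough to enumerate. In this sketch the bit patterns and state names are my own illustrative labels, not the bill's exact terms; the point is just that three bits yield eight values and only four carry assigned meanings.

```python
# Hypothetical assignment: three bits (two CGMS-A bits plus the RCI bit)
# give eight possible values, of which only four are given meanings.
# Patterns and labels below are illustrative, not taken from the bill.
ASSIGNED = {
    0b000: "copy freely",
    0b010: "copy once",
    0b011: "copy never",
    0b100: "redistribution controlled",
}

all_states = 2 ** 3                      # 8 distinct values from three bits
unused = all_states - len(ASSIGNED)      # 4 states left on the table
print(f"{unused} of {all_states} states unused")
```

Those four spare states are exactly where options like "free to copy for personal use" could have lived without changing the encoding at all.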

The second technology, VEIL, is a watermark that is inserted into the video itself. VEIL was originally developed as a way for TV shows to send signals to toys. If you pointed the toy at the TV screen, it would detect any VEIL information encoded into the TV program, and react accordingly.

Then somebody got the idea of using VEIL as a “rights signaling” technology. The idea is that whenever CGMS-A is signaling restrictions on copying, a VEIL watermark is put into the video. Then if a signal is found to have a VEIL watermark, but no CGMS-A information, this is taken as evidence that CGMS-A information must have been lost from that signal at some point. When this happens, the bill requires that the most restrictive DRM rules be applied, allowing viewing of the video and nothing else.
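The decision rule just described reduces to a short function. This is a sketch of the logic as characterized above, under my own naming; the function and state strings are hypothetical, not language from the bill.

```python
def required_treatment(cgms_a_state, veil_present):
    """Treatment a compliant device must apply, per the bill's logic
    as described above. cgms_a_state is the CGMS-A instruction if the
    VBI data survived, else None; veil_present is whether the in-picture
    VEIL watermark was detected."""
    if cgms_a_state is not None:
        return cgms_a_state       # CGMS-A present: obey whatever it says
    if veil_present:
        # Watermark with no CGMS-A: taken as evidence the control bits
        # were stripped somewhere, so apply the most restrictive rule.
        return "view only"
    return "unrestricted"         # neither signal: treat as unprotected

assert required_treatment("copy once", True) == "copy once"
assert required_treatment(None, True) == "view only"
assert required_treatment(None, False) == "unrestricted"
```

Note the asymmetry: the robust watermark never grants rights, it only triggers restrictions when the fragile signal goes missing.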

Tellingly, advocates of this scheme do their best to avoid calling VEIL a “watermark”, even though that’s exactly what it is. A watermark is an imperceptible (or barely perceptible) component added to an audio or video signal to convey information. That’s a perfect description of VEIL.

Why don’t they call it a watermark? Probably because watermarks have a bad reputation as DRM technologies, after the Secure Digital Music Initiative (SDMI). SDMI used two signals, one of which was a “robust” watermark, to encode copy control information in content. If the robust watermark was present but the other signal was absent, this was taken as evidence that something was wrong, and strict restrictions were to be enforced. Sound familiar?

SDMI melted down after its watermark candidates – all four of them – were shown to be removable by an adversary of modest skill. And an adversary who could remove the watermark could then create unprotected copies of the content.

Is the VEIL watermark any stronger than the SDMI watermarks? I would expect it to be weaker, since the VEIL technology was originally designed for an application where accidental loss of the watermark was a problem, but deliberate removal by an adversary was not an issue. So how does VEIL work? I’ll write about that soon.

UPDATE (23 Jan): An industry source tells me that one factor in the decision not to call VEIL a watermark is that some uses of watermarks for DRM are patented, and calling it a watermark might create uncertainty about whether it was necessary to license watermarking patents. Some people also assert (incorrectly, in my view) that a watermark must encode some kind of message, beyond just the presence of the watermark. My view is still that VEIL is accurately called a watermark.

The Professional Device Hole

Any American parent with kids of a certain age knows Louis Sachar’s novel Holes, and the movie made from it. It’s set somewhere in the Texas desert, at a boot camp for troublemaking kids. The kids are forced to work all day in the scorching sun, digging holes in the rock-hard ground and then re-filling them. It seems utterly pointless, but the grown-ups say it builds character. Eventually we learn that the holes aren’t pointless at all, but in fact serve the interests of a few nasty grown-ups.

Speaking of holes, and pointless exercises, last month Reps. Sensenbrenner and Conyers introduced a bill, the Digital Transition Content Security Act, also known as the Analog Hole Bill.

“Analog hole” is an artfully chosen term, referring to the fact that audio and video can be readily converted back and forth between digital and analog formats. This is just a fact about the universe, but calling it a “hole” makes it sound like a problem that might possibly be solved. The last large-scale attack on the analog hole was the Secure Digital Music Initiative (SDMI) which went down in flames in 2002 after its technology was shown to be ineffective (and after SDMI famously threatened to sue researchers for analyzing the technology).

The Analog Hole Bill would mandate that any devices that can translate certain types of video signals from analog to digital form must comply with a Byzantine set of design restrictions that talk about things like “certified digital content rights protection output technologies”. Let’s put aside for now the details of the technology design being mandated; I’ll critique them in a later post. I want to write today about the bill’s exemption for “professional devices”:

PROFESSIONAL DEVICE.—(A) The term “professional device” means a device that is designed, manufactured, marketed, and intended for use by a person who regularly employs such a device for lawful business or industrial purposes, such as making, performing, displaying, distributing, or transmitting copies of audiovisual works on a commercial scale at the request of, or with the explicit permission of, the copyright owner.

(B) If a device is marketed to or is commonly purchased by persons other than those described in subparagraph (A), then such device shall not be considered to be a “professional device”.

Tim Lee at Tech Liberation Front points out one problem with this exemption:

“Professional” devices, you see, are exempt from the restrictions that apply to all other audiovisual products. This raises some obvious questions: is it the responsibility of a “professional device” maker to ensure that too many “non-professionals” don’t purchase their product? If a company lowers its price too much, thereby allowing too many of the riffraff to buy it, does the company become guilty of distributing a piracy device? Perhaps the government needs to start issuing “video professional” licenses so we know who’s allowed to be part of this elite class?

I think this legislative strategy is extremely revealing. Clearly, Sensenbrenner’s Hollywood allies realized that all this copy-protection nonsense could cause problems for their own employees, who obviously need the unfettered ability to create, manipulate, and convert analog and digital content. This is quite a reasonable fear: if you require all devices to recognize and respect encoded copy-protection information, you might discover that content which you have a legitimate right to access has been locked out of reach by over-zealous hardware. But rather than taking that as a hint that there’s something wrong with the whole concept of legislatively-mandated copy-protection technology, Hollywood’s lobbyists took the easy way out: they got themselves exempted from the reach of the legislation.

In fact, the professional device hole is even better for Hollywood than Tim Lee realizes. Not only will it protect Hollywood from the downside of the bill, it will also create new barriers to entry, making it harder for amateurs to create and distribute video content – and just at the moment when technology seems to be enabling high-quality amateur video distribution.

The really interesting thing about the professional device hole is that it makes one provision of the bill utterly impossible to put into practice. For those reading along at home, I’m referring to the robustness rulemaking of section 202(1), which requires the Patent and Trademark Office (PTO) to establish technical requirements that (among other things) “can only with difficulty be defeated or circumvented by use of professional tools or equipment”. But there’s a small problem: professional tools are exempt from the technical requirements.

The robustness requirements, in other words, have to stop professional tools from copying content – and they have to do that, somehow, without regulating what professional tools can do. That, as they say, is a tall order.

That’s all for today, class. Here’s the homework, due next time:
(1) Table W, the most technical part of the bill, contains an error. (It’s a substantive error, not just a typo.) Explain what the error is.
(2) How would you fix the error?
(3) What can we learn from the fact that the error is still in the bill at this late date?