December 26, 2024

Economic Growth, Censorship, and Search Engines

Economic growth depends on the ability to access relevant information. Although censorship prevents access to certain information, the direct consequences of censorship are well-known and somewhat predictable. For example, blocking access to Falun Gong literature is unlikely to harm a country’s consumer electronics industry. On the web, however, information of all types is interconnected. Blocking a web page might have an indirect impact reaching well beyond that page’s contents. To understand this impact, let’s consider how search results are affected by censorship.

Search engines keep track of what’s available on the web and suggest useful pages to users. No comprehensive list of web pages exists, so search providers check known pages for links to unknown neighbors. If a government blocks a page, all links from the page to its neighbors are lost. Unless detours exist to the page’s unknown neighbors, those neighbors become unreachable and remain unknown. These unknown pages can’t appear in search results — even if their contents are uncontroversial.
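To make the reachability point concrete, here is a minimal sketch in Python of a crawler that respects a blocklist; the link graph and page names are invented for illustration, not taken from any real crawler. A page whose only inbound links pass through a censored page is simply never discovered:

    from collections import deque

    # Hypothetical link graph: each page lists the pages it links to.
    links = {
        "portal": ["news", "blocked_page"],
        "news": ["sports"],
        "blocked_page": ["harmless_recipes"],  # the only path to this page
        "sports": [],
        "harmless_recipes": [],
    }

    def crawl(seeds, blocked):
        """Breadth-first crawl that skips censored pages entirely."""
        seen, queue = set(), deque(seeds)
        while queue:
            page = queue.popleft()
            if page in blocked or page in seen:
                continue
            seen.add(page)
            queue.extend(links.get(page, []))
        return seen

    print(crawl(["portal"], blocked=set()))             # finds all five pages
    print(crawl(["portal"], blocked={"blocked_page"}))  # never finds 'harmless_recipes'

The recipes page is perfectly uncontroversial, but with “blocked_page” censored it never enters the index at all.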

When presented with a query, search engines respond with relevant known pages sorted by expected usefulness. Censorship also affects this sorting process. In predicting usefulness, search engines consider both the contents of pages and the links between pages. Links here are like friendships in a stereotypical high school popularity contest: the more popular friends you have, the more popular you become. If your friend moves away, you become less popular, which makes your friends less popular by association, and so on. Even people you’ve never met might be affected.
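Link-analysis algorithms such as PageRank formalize this popularity contest. The toy power-iteration sketch below is only illustrative (a hypothetical four-page graph, a simplified treatment of dangling pages, not any search engine’s actual implementation); it reports each page’s rank relative to the average page so that the full and censored graphs can be compared fairly:

    def pagerank(links, damping=0.85, iters=100):
        """Toy PageRank by power iteration; dangling pages spread rank uniformly."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / n for p in pages}
            for p, outs in links.items():
                targets = outs if outs else pages   # dangling page
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new[t] += share
            rank = new
        return rank

    # Hypothetical graph: 'blocked' links to 'a'; 'a', 'b', 'c' form a cycle.
    graph = {"blocked": ["a"], "a": ["b"], "b": ["c"], "c": ["a"]}
    censored = {p: outs for p, outs in graph.items() if p != "blocked"}

    for name, g in [("full", graph), ("censored", censored)]:
        r = pagerank(g)
        # Report each page's rank relative to the average page, so the two
        # differently-sized graphs can be compared fairly.
        print(name, {p: round(r[p] * len(g), 2) for p in ("a", "b", "c")})

Relative to the average page, “a” falls from about 1.33× to 1.0× once its censored neighbor disappears, and the loss propagates around the cycle to “b” and “c”, which never linked to or from the blocked page at all.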

“Popular” web pages tend to appear higher in search results. Censoring a page distorts this popularity contest and can change the order of even unrelated results. As more pages are blocked, the censored view of the web becomes increasingly distorted. As an aside, Ed notes that blocking a page removes more than just the offending material. If censors block Ed’s site due to an off-hand comment on Falun Gong, he also loses any influence he has on information security.

These effects would typically be rare and have a disproportionately small impact on popular pages. Google’s emphasis on the long tail, however, suggests that considerable value lies in providing high-quality results covering even less-popular pages. To avoid these issues, a government could grant a limited set of individuals full web access to develop tools like search engines, but this approach seems likely to stifle competition and innovation.

Countries with greater censorship might produce lower-quality search engines, but Google, Yahoo, Microsoft, and others can provide high-quality search results in those countries. These companies can access uncensored data, mitigating the indirect effects of censorship. This emphasizes the significance of measures like the Global Network Initiative, which has a participant list that includes Google, Yahoo, and Microsoft. Among other things, the initiative provides guidelines for participants regarding when and how information access may be restricted. The effectiveness of this specific initiative remains to be seen, but such measures may provide leading search engines with greater leverage to resist arbitrary censorship.

Search engines are unlikely to be the only tools adversely impacted by the indirect effects of censorship. Any tool that relies on links between information (think social networks) might be affected, and repressive states place themselves at a competitive disadvantage in developing these tools. Future developments might make these points moot: in a recent talk at the Center, Ethan Zuckerman mentioned tricks and trends that might make censorship more difficult. In the meantime, however, governments that censor information may increasingly find that they do so at their own expense.

The future of photography

Several interesting things are happening in the wild world of digital photography as it collides with digital video. Most notably, the new Canon 5D Mark II (roughly $2700) can record 1080p video and the new Nikon D90 (roughly $1000) can record 720p video. At the higher end, Red just announced cameras, shipping next year, that can record video (as fast as 120 frames per second in some cases) at resolutions far greater than HD (for $12K, you can record video at a staggering 6000×4000 pixels). You can configure a Red camera as a still camera or as a video camera.

Recently, well-known photographer Vincent Laforet (perhaps best known for his aerial photographs, such as “Me and My Human”) got his hands on a pre-production Canon 5D Mark II and filmed a “mock commercial” called “Reverie”, which shows off what the camera can do, particularly its see-in-the-dark low-light abilities. If you read Laforet’s blog, you’ll see that he’s quite excited, not just about the technical aspects of the camera, but about what this means to him as a professional photographer. Suddenly, he can leverage all of the expensive lenses that he already owns and capture professional-quality video “for free.” This has all kinds of ramifications for what it means to cover an event.

For example, at professional sporting events, video rights are entirely separate from the “normal” still photography rights given to the press. It’s now the case that every pro photographer is every bit as capable of capturing full resolution video as the TV crew covering the event. Will still photographers be contractually banned from using the video features of their cameras? Laforet investigated while he was shooting the Beijing Olympics:

Given that all of these rumours were going around quite a bit in Beijing [prior to the announcement of the Nikon D90 or Canon 5D Mark II] – I sat down with two very influential people who will each be involved at the next two Olympic Games. Given that NBC paid more than $900 million to acquire the U.S. Broadcasting rights to this past summer games, how would they feel about a still photographer showing up with a camera that can shoot HD video?

I got the following answer from the person who will be involved with Vancouver which I’ll paraphrase: Still photographers will be allowed in the venues with whatever camera they chose, and shoot whatever they want – shooting video in it of itself, is not a problem. HOWEVER – if the video is EVER published – the lawsuits will inevitably be filed, and credentials revoked etc.

This to me seems like the reasonable thing to do – and the correct approach. But the person I spoke with who will be involved in the London 2012 Olympic Games had a different view, again I paraphrase: “Those cameras will have to be banned. Period. They will never be allowed into any Olympic venue” because the broadcasters would have a COW if they did. And while I think this is not the best approach – I think it might unfortunately be the most realistic. Do you really think that the TV producers and rights-owners will “trust” photographers not to broadcast anything they’ve paid so much for. Unlikely.

Let’s do a thought experiment. Red’s forthcoming “Scarlet FF35 Mysterium Monstro” will happily capture 6000×4000 pixels at 30 frames per second. If you multiply that out, assuming 8 bits per pixel (after modest compression), you’re left with the somewhat staggering data rate of 720MB/s (i.e., 2.6TB/hour); the arithmetic is spelled out below. Assuming you’re recording that to the latest 1.5TB hard drives, that means you’re swapping media every 30 minutes (or you’re tethered to a RAID box of some sort). Sure, your camera now weighs more and you’re carrying around a bunch of hard drives (still lost in the noise relative to the weight that a sports photographer hauls around in those long telephoto lenses), but you manage to completely eliminate the “oops, I missed the shot” issue that dogs any photographer. Instead, the “shoot” button evolves into more of a bookmarking function. “Yeah, I think something interesting happened around here.”

It’s easy to see photo editors getting excited by this. Assuming you’ve got access to multiple photographers operating from different angles, you can now capture multiple views of the same event at the same time. With all of that data, synchronized and registered, you could even do 3D reconstructions (made famous/infamous by the “bullet time” videos used in the Matrix films or the Gap’s Khaki Swing commercial). Does the local newspaper have the rights to do that to an NFL game or not?
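For anyone who wants to check the back-of-the-envelope numbers from the thought experiment, here they are spelled out under the same assumptions (6000×4000 pixels, 30 frames per second, 8 bits per pixel after modest compression):

    pixels_per_frame = 6000 * 4000   # 24 megapixels
    bytes_per_pixel = 1              # 8 bits per pixel after modest compression
    fps = 30

    bytes_per_second = pixels_per_frame * bytes_per_pixel * fps
    print(bytes_per_second / 1e6, "MB/s")               # 720.0 MB/s
    print(bytes_per_second * 3600 / 1e12, "TB/hour")    # about 2.6 TB/hour

    drive_capacity = 1.5e12          # a 1.5TB hard drive, in bytes
    print(drive_capacity / bytes_per_second / 60, "minutes per drive")  # about 35 minutes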

Of course, this sort of technology is going to trickle down to gear that mere mortals can afford. Rather than capturing every frame, maybe you now only keep a buffer of the last ten seconds or so, and when you press the “shoot” button, you get to capture the immediate past as well as the present. Assuming you’ve got a sensor that lets you change the exposure on the fly, you can also now imagine a camera capturing a rapid succession of images at different exposures. That means no more worries about whether you over- or under-exposed your image. In fact, the camera could just glue all the images together into a high-dynamic-range (HDR) image, which sometimes yields fantastic results.
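The “capture the immediate past” idea amounts to a rolling buffer. A minimal sketch, with a made-up frame rate and buffer length and strings standing in for frames, might look like this:

    from collections import deque

    FPS = 30
    BUFFER_SECONDS = 10

    class RollingCapture:
        """Keeps only the last BUFFER_SECONDS of frames in memory."""
        def __init__(self):
            self.frames = deque(maxlen=FPS * BUFFER_SECONDS)

        def on_new_frame(self, frame):
            # Called continuously by the sensor; old frames fall off the left end.
            self.frames.append(frame)

        def shoot(self):
            # Pressing the button returns the recent past, not just the present.
            return list(self.frames)

    cam = RollingCapture()
    for t in range(1000):                 # simulate about 33 seconds of frames
        cam.on_new_frame(f"frame-{t}")
    print(len(cam.shoot()), "frames kept, starting at", cam.shoot()[0])
    # -> 300 frames kept, starting at frame-700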

One would expect, in the cutthroat world of consumer electronics, that competition would bring features like this to market as fast as possible, although that’s far from a given. If you install third-party firmware on a Canon point-and-shoot, you get all kinds of functionality that the hardware can support but which Canon has chosen not to implement. Maybe Canon would rather you spend more money for more features, even if the cheaper hardware is perfectly capable. Maybe they just want to make common features easy to use and not overly clutter the UI. (Not that any camera vendors are doing particularly well on ease of use, but that’s a topic for another day.)

Freedom to Tinker readers will recognize some common themes here. Do I have the right to hack my own gear? How will new technology impact old business models? In the end, when industries collide, who wins? My fear is that the creative freelance photographer, like Laforet, is likely to get pushed out by the big corporate sponsor. Why allow individual freelancers to shoot a sports event when you can just spread professional video cameras all over the place and let newspapers buy stills from those video feeds? Laforet discussed these issues at length; his view is that “traditional” professional photography, as a career, is on its way out and the future is going to be very, very different. There will still be demand for the kind of creativity and skills that a good photographer can bring to the game, but the new rules of the game have yet to be written.

Maybe "Open Source" Cars Aren't So Crazy After All

I wrote last week about the case for open source car software and, lo and behold, BMW might be pushing forward with the idea, albeit not in self-driving cars quite yet. 😉

Tangentially, I put “open source” in scare quotes because the car scenario highlights a new but important split in the open source and free software communities. The Open Source Initiative’s open source definition allows use of the term ‘open source’ to describe code that is available and modifiable but not installable on the intended device. Traditionally, the open source community assumed that if source was available, you could make modifications and install those modifications on the targeted hardware. That was a safe assumption, since the hardware in question was almost always a generative, open PC or OS. It no longer holds: as I mentioned in my original car article, one might want to sign binaries so that not just anyone could hack on their cars, for example. Presumably even open source voting machines would have a similar restriction.
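To see how “source available” and “installable” come apart in practice, here is a rough sketch of the kind of signature check a locked-down device might run before accepting new firmware. It uses the third-party Python cryptography package for illustration; the key handling and image format are invented for the example, not taken from any real car or voting machine:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In a real device, only the manufacturer holds the private key and the
    # public key is baked into the bootloader. Here we generate both locally
    # just to illustrate the check.
    vendor_key = Ed25519PrivateKey.generate()
    device_trusted_pubkey = vendor_key.public_key()

    official_firmware = b"firmware built by the vendor"
    signature = vendor_key.sign(official_firmware)

    def install(image, sig):
        """Refuse any image that the vendor did not sign."""
        try:
            device_trusted_pubkey.verify(sig, image)
        except InvalidSignature:
            return "rejected: not signed by the vendor"
        return "installed"

    print(install(official_firmware, signature))                          # installed
    print(install(b"firmware you compiled from the source", signature))   # rejected

Even with every line of source in hand, a check like this means your own build will never run on the device unless you also hold the vendor’s signing key.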

Another example appears to be the new ‘google phone’ (G1 or Android). You can download several gigs of source code now, appropriately licensed, so that the code can be called ‘open source’ under the OSI’s definition. But apparently you can’t yet modify that code and install the modified binaries on your own phone.

The new GPL v3 tries to address this issue by requiring (under certain circumstances) that GPL v3’d code be installable on the devices with which it is shipped. But to the best of my knowledge no other license yet requires this, and v3 is not yet widespread enough to put a serious dent in the trend.

Exactly how ‘open’ such code really is remains up for discussion. It meets the official definition, but the inability to actually do much with the code seems likely to limit the growth of a constructive community around the software for these types of devices, whether phones, cars, or otherwise. This issue bears keeping in mind when thinking about openness for the source code of closed hardware: you will certainly see ‘open source’ tossed around a lot, but in this context it may not always mean what you think it does.

Interesting Email from Sequoia

A copy of an email I received has been passed around on various mailing lists. Several people, including reporters, have asked me to confirm its authenticity. Since everyone seems to have read it already, I might as well publish it here. Yes, it is genuine.

====

Sender: Smith, Ed [address redacted]@sequoiavote.com
To: ,
Subject: Sequoia Advantage voting machines from New Jersey
Date: Fri, Mar 14, 2008 at 6:16 PM

Dear Professors Felten and Appel:

As you have likely read in the news media, certain New Jersey election officials have stated that they plan to send to you one or more Sequoia Advantage voting machines for analysis. I want to make you aware that if the County does so, it violates their established Sequoia licensing Agreement for use of the voting system. Sequoia has also retained counsel to stop any infringement of our intellectual properties, including any non-compliant analysis. We will also take appropriate steps to protect against any publication of Sequoia software, its behavior, reports regarding same or any other infringement of our intellectual property.

Very truly yours,
Edwin Smith
VP, Compliance/Quality/Certification
Sequoia Voting Systems

[contact information and boilerplate redacted]

Obama's Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier – none of the other major players comes anywhere near his level of detail in their official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him – but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on it over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs – information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service and so on). Along the way, other areas get passing mention – immigration of high tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked-about tech policy issue in Washington – it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of his view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer, who launched Govtrack.us, has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.
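To make the format point concrete, compare a free-form sentence with the same facts carried as structured, machine-readable data. The record below is entirely made up for illustration (the bill number, date, and tallies are hypothetical, and the field names are not drawn from THOMAS, GovTrack, or any real schema):

    import json

    # What a screen-scraper has to start from: free-form text.
    plain_text = "H.R. 1234 passed the House 254-178 on 2007-11-01."
    print(plain_text)

    # The same (hypothetical) facts with explicit, machine-readable metadata.
    record = {
        "bill": {"congress": 110, "type": "hr", "number": 1234},
        "chamber": "house",
        "date": "2007-11-01",
        "result": "passed",
        "votes": {"yea": 254, "nay": 178},
    }
    print(json.dumps(record, indent=2))

The structured version can be joined directly against other datasets, such as campaign contributions; the plain text version has to be parsed first, and every site formats its text a little differently.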

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.