November 25, 2024

Kundra Named As Federal CIO

Today, the Obama administration named Vivek Kundra as the Chief Information Officer of the U.S. government, a newly created position.

This is great news. Kundra, in his previous role as CTO of the District of Columbia, made great strides in opening the DC government by publishing government data. When he spoke at our Thursday Forum last fall, everyone was impressed by how quickly and effectively he had transformed the DC government’s approach to technology.

First, he set up an open Data Catalog, where lots of data collected by the DC government is freely available in standard formats. Second, he ran the Apps for Democracy contest, in which he challenged citizens to develop applications to take advantage of all the data that the DC government is publishing. The results were impressive—with 47 different apps submitted by citizens—and also inexpensive.
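
To give a sense of what building on this kind of open data looks like in practice, here is a minimal sketch of the sort of starting point an Apps for Democracy entry might have: download a dataset from a catalog and summarize it. The URL and column name below are placeholders of my own, not a real DC feed.

```python
# A minimal sketch of consuming an open-data catalog entry. The URL and the
# "category" column are hypothetical placeholders, not a real DC dataset.
import csv
import io
import urllib.request
from collections import Counter

CATALOG_URL = "https://example.org/data/service-requests.csv"  # placeholder

def category_counts(url=CATALOG_URL):
    """Download a CSV dataset and tally its rows by the 'category' column."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    rows = csv.DictReader(io.StringIO(text))
    return Counter(row["category"] for row in rows)

if __name__ == "__main__":
    for category, count in category_counts().most_common(10):
        print(f"{category}: {count}")
```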

Most impressively, in doing this he overcame the natural inertia of big city government. The Federal government will be even harder to budge, but with the right support from the top, Kundra could bring a new level of openness and tech-friendliness to the government.

New Podcast: CITP Conversations

Over the last few months, as the pace of activity at CITP has increased, we’ve fielded a growing number of requests from points around the web, and around the world, for podcasts and other ways to “attend” our events virtually. We hear you, and we’re working on it.

Today, I’m very pleased to announce a new CITP podcast, which will carry audio of some of our events as well as brief conversations recorded expressly for the podcast feed. Currently, those conversations include one with Paul Ohm on Net Neutrality and the Wiretap Act, and another with Ed on “Rebooting our Cyber-Security Policy,” drawing on the themes of his recent Thursday Forum talk.

We are also working to offer more of our events in video formats, and we remain open to exploring additional options. Stay tuned!

RIP Rocky Mountain News

The Rocky Mountain News, Colorado’s oldest newspaper, closed its doors Friday. On its front page it has this incredibly touching video:

Final Edition from Matthew Roberts on Vimeo.

The closing of a large institution like a daily newspaper is an incredibly sad event, and my heart goes out to all the people who find their lives upended by sudden unemployment. Many talented and dedicated employees lost their jobs today, and some of them will have to scramble to salvage their careers and support their families. The video does a great job of capturing the shock and sadness that the employees of the paper feel—not just because they lost their jobs, but also because in some sense they’re losing their life’s work.

With that said, I do think it’s unfortunate that part of the video was spent badmouthing people, like me, who don’t subscribe to newspapers. One gets the impression that newspapers are failing because kids these days are so obsessed with swapping gossip on MySpace that they’ve stopped reading “real” news. No doubt, some people fit that description, but I think the more common case is something like the opposite: those of us with the most voracious appetite for news have learned that newsprint simply can’t compete with the web for breadth, depth, or timeliness. When I pick up a newspaper, I’m struck by how limited it is: the stories are 12 to 36 hours old, the range of topics covered is fairly narrow, and there’s no way to dig deeper on the stories that interest me most. That’s not the fault of the newspaper’s editors and reporters; newsprint is just an inherently limited medium.

As more newspapers go out of business in the coming years, I think it’s important that our sympathy for individual employees not translate into the fetishization of newsprint as a medium. And it’s especially important that we not confuse newsprint as a medium with journalism as a profession. Newsprint and journalism have been strongly associated in the past, but this is an accident of technology, not something inherent to journalism. Journalism—the process of gathering, summarizing, and disseminating information about current events—has been greatly enriched by the Internet. Journalists have vastly more tools available for gathering the news, and much more flexible tools for disseminating it. The replacement of static newspapers with dynamic web pages is progress.

But that doesn’t mean it’s not a painful process. The web’s advantages are no consolation for Rocky employees who have spent their careers building skills connected to a declining technology. And the technical superiority of the web will be of little consolation to Denver-area readers who will, in the short run, have less news and information available about their local communities. So my thoughts and sympathy today are with the employees of the Rocky Mountain News.

The Future of Smartphone Platforms

In 1985, I got my very first home computer: a Commodore Amiga 1000. At the time, it was awesome: great graphics, great sound, “real” multitasking, and so forth. Never mind that I spent half my life shuffling floppy disks around. Never mind that I kept my head full of Epson escape codes to use with my word processing program to get what I wanted out of my printer. No, no, the Amiga was wonderful stuff.

Let’s look at the Amiga’s generation. Starting with the IBM PC in 1981, the PC industry was in the midst of the transition from 8-bit micros (Commodore 64, Apple II, Atari 800, BBC Micro, TI-99/4A, etc.) to 16/32-bit micros (IBM PC, Apple Macintosh, Commodore Amiga, Atari ST, Acorn Archimedes, etc.). These new machines each ran completely unrelated operating systems, and there was no consensus as to which would be the ultimate winner. In 1985, nobody would have called the PC’s victory inevitable. Regardless, we all know how it worked out: Apple developed a small but steady market share, PCs took over the world (sans IBM), and the other computers faded away. Why?

The standard argument is “network effects.” PCs (and to a lesser extent Macs) developed sufficient followings to make them attractive platforms for developers, which in turn made them attractive to new users, which created market share, which created resources for future hardware developments, and on it went. The Amiga, on the other hand, became popular only in specific market niches, such as video processing and editing. Another benefit on the PC side was that Microsoft enabled clone shops, from Compaq to Dell and onward, to battle each other with low prices on commodity hardware. Despite the superior usability of a Mac or the superior graphics and sound of an Amiga, the PC came away the winner.
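
The feedback loop behind “network effects” is easy to see in a toy model. The simulation below is my own illustration, not a reconstruction of the actual 1980s market: each new buyer mostly chooses the platform with the largest installed base, so a small early lead tends to compound.

```python
# A toy model of network effects (an illustration, not a market analysis):
# new buyers mostly follow the installed base, so early leads compound.
import random

def simulate(platforms=("PC", "Mac", "Amiga", "Atari ST"),
             buyers=100_000, exploration=0.05):
    """Each platform starts with one user; returns final installed bases."""
    installed = dict.fromkeys(platforms, 1)
    for _ in range(buyers):
        if random.random() < exploration:
            # A few buyers ignore market share and pick at random.
            choice = random.choice(platforms)
        else:
            # Most buyers choose in proportion to the installed base:
            # more users -> more software -> more users.
            choice = random.choices(list(installed),
                                    weights=list(installed.values()))[0]
        installed[choice] += 1
    return installed

if __name__ == "__main__":
    random.seed(1)
    shares = simulate()
    total = sum(shares.values())
    for name, count in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {100 * count / total:.1f}%")
```

Which platform pulls ahead in this model depends heavily on the random early choices, which is the point: with strong network effects, the winner is determined as much by the feedback loop as by merit.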

What about cellular smartphones, then? I’ve got an iPhone. I have friends with Windows Mobile, Android, and Blackberry devices. When the Palm Pre comes out, it should gain significant market share as well. I’m sure there are people out there who love their Symbian or OpenMoko phones. The level of competition in today’s smartphone world bears more than a passing resemblance to the competition in the mid-1980s PC market. So who’s going to win?

If you believe that the PC’s early lead and widespread adoption by business were essential to its rise, then you could expect the Blackberry to win out. If you believe that having the software and hardware come from separate vendors was essential, then you’d favor Windows Mobile or Android. If you’re looking for network effects, look no farther than the iPhone. If you’re looking for the latest, coolest thing, then the Palm Pre sure does look attractive.

I’ll argue that this time will be different, and it’s the cloud that’s going to win. Right now, what matters to me, with my iPhone, is that I can get my email anywhere, I can make phone calls, and I can do basic web surfing. I occasionally use the GPS maps, or even watch a show purchased from the iTunes Store, but if you took those away, it wouldn’t change my life much. I’ve got pages of obscure apps, but none of them really lock me into the platform. (Example: Shazam is remarkably good at recognizing songs that it hears, but the client side of it is a very simple app that they could trivially port to any other smartphone.) On the flip side, I’m an avid consumer of Google’s resources (Gmail, Reader, Calendar, etc.). I would never buy a phone that I couldn’t connect to Google. Others will insist on being able to connect to their Exchange Server.

At the end of the day, the question isn’t whether a given smartphone interoperates with your friends’ phones, but whether it interoperates with your cloud services. You don’t need an Android phone to get a good mobile experience with Google, and you don’t need a Windows Mobile phone to get a good mobile experience with Exchange. Leaving one smartphone and adopting another is, if anything, easier than switching between traditional, non-smart phones, since you don’t have to monkey as much with moving your address book around. As such, I think it’s reasonable to predict that, in ten years, we’ll still have at least one smartphone vendor per major cellular carrier, and perhaps more.
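
To make the address-book point concrete: most phones and cloud address books can export contacts in the standard vCard format, which is what makes carrying them to a new device relatively painless. Here is a rough sketch of reading such an export; the file name is made up, and real vCards have many more fields and quirks than this handles.

```python
# A rough sketch: pull names and phone numbers out of a vCard (.vcf) export
# so they can be re-imported elsewhere. Real vCards are messier than this.

def parse_vcards(path):
    """Return a list of {'name': ..., 'phones': [...]} dicts from a .vcf file."""
    contacts, current = [], None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if line == "BEGIN:VCARD":
                current = {"name": None, "phones": []}
            elif line == "END:VCARD" and current is not None:
                contacts.append(current)
                current = None
            elif current is not None:
                if line.startswith("FN:"):          # formatted name
                    current["name"] = line[len("FN:"):]
                elif line.startswith("TEL"):        # e.g. TEL;TYPE=CELL:+1...
                    current["phones"].append(line.split(":", 1)[-1])
    return contacts

if __name__ == "__main__":
    for contact in parse_vcards("contacts_export.vcf"):  # hypothetical file
        print(contact["name"], ", ".join(contact["phones"]))
```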

If we have further consolidation in the carrier market, that would put pressure on the smartphone vendors to cut costs, which could well lead to consolidation of the smartphone vendors. We could certainly also imagine carriers pushing the smartphone vendors to include or omit particular features. We see plenty of that already. (Example: can you tether your laptop to a Palm Pre via Bluetooth? The answer seems to be a moving target.) U.S. carriers have historically been somewhat infamous for going out of their way to restrict what phones can do. That now seems to be mostly fixed, and for that, at least, we can thank Apple.

Let a thousand smartphones bloom? I sure hope so.

Federal Health IT Effort Is Making Progress, Could Benefit from More Transparency

President Obama has indicated that health information technology (HIT) is an important component of his administration’s health care goals. Politicians on both sides of the aisle have lauded the potential for HIT to reduce costs and improve care. In this post, I’ll give some basics about what HIT is, what work is underway, and how the government can get more security experts involved.

We can coarsely break HIT into three technical areas. The first is the transition from paper to electronic records, which involves a surprising number of subtle technical issues, such as interoperability. The second is the development of health information networks, which will allow sharing of patient data between medical facilities and with other appropriate parties. The third, as a recent National Research Council report discusses, is the use of digital records to enable research in new areas, such as cognitive support for physicians.

HIT was not created on the 2008 campaign trail. The Department of Veterans Affairs (VA) has done work in this area for decades, including its widely praised VistA system, which provides electronic patient records and more. Notably, VistA source code and documentation can be freely downloaded. Many other large medical centers also already use electronic patient records.

In 2004, then-President Bush pushed for deployment of a Nationwide Health Information Network (NHIN) and universal adoption of electronic patient records by 2014. The NHIN is essentially a nationwide network for sharing relevant patient data (e.g., if you arrive at an emergency room in Oregon, the doctor can obtain needed records from your regular doctor in Kansas). The Department of Health and Human Services (HHS) funded four consortia to develop smaller, localized networks, partially as a learning exercise to prepare for the NHIN. HHS has held a number of forums where members of these consortia, the government, and the public can meet and discuss timely issues.

The agendas for these forums show some positive signs. Sessions cover a number of tricky issues. For example, participants in one session considered the risk that searches for a patient’s records in the NHIN could yield records for patients with similar attributes, posing privacy concerns. Provided that meaningful conversations occurred, HHS appears to be making a concerted effort to ensure that issues are identified and discussed before settling on solutions.
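
The record-matching risk is easy to illustrate. The sketch below is a toy of my own, not a description of any actual NHIN component: a lookup that matches patients by approximate name and birth year can return other, similar patients’ records alongside the intended one, which is exactly the kind of leak worth catching before deployment.

```python
# A toy illustration (not a real NHIN component): fuzzy demographic matching
# can return records for more than one person.
from difflib import SequenceMatcher

RECORDS = [  # made-up data
    {"name": "John Smith",  "birth_year": 1962, "record_id": "OR-1041"},
    {"name": "Jon Smith",   "birth_year": 1962, "record_id": "OR-2210"},
    {"name": "Joan Smythe", "birth_year": 1963, "record_id": "KS-0387"},
]

def lookup(name, birth_year, threshold=0.8):
    """Return records whose name is 'close enough' and birth year is within a year."""
    matches = []
    for rec in RECORDS:
        similarity = SequenceMatcher(None, name.lower(), rec["name"].lower()).ratio()
        if similarity >= threshold and abs(rec["birth_year"] - birth_year) <= 1:
            matches.append((similarity, rec))
    return sorted(matches, key=lambda m: m[0], reverse=True)

if __name__ == "__main__":
    # A query for one patient pulls back records for two different people.
    for score, rec in lookup("John Smith", 1962):
        print(f"{score:.2f}  {rec['record_id']}  {rec['name']}")
```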

Unfortunately, the academic information security community seems divorced from these discussions. Members of that community will eventually analyze the proposed systems, whether before or after they are widely deployed, and that analysis would be far more valuable if it happened earlier. In spite of the positive signs mentioned above, past experience shows that even skilled developers can produce insecure systems. Any major flaws uncovered may be embarrassing, but weaknesses found now would be cheaper and easier to fix than ones found in 2014.

A great way to draw constructive scrutiny is to ensure transparency in federally funded HIT work. Limited project details are often available online, but both high- and low-level details can be hard to find. Presumably, members of the NHIN consortia (for example) developed detailed internal documents containing use cases, perceived risks/threats, specifications, and architectural illustrations.

To the extent legally feasible, the government should make documents like these available online. Access to them would make the projects easier to analyze, particularly for those of us less familiar with HIT. In addition, a typical vendor response to reported vulnerabilities is that the attack scenario is unrealistic (this is a standard response of e-voting vendors). Researchers can use these documents to ensure that they consider only realistic attacks.

The federal agenda for HIT is ambitious and will likely prove challenging and expensive. To avoid massive, costly mistakes, the government should seek to get as many eyes as possible on the work that it funds.