OLPC: Too Much Innovation?

The One Laptop Per Child (OLPC) project is rightly getting lots of attention in the tech world. The idea – putting serious computing and communication technologies into the hands of kids all over the world – could be transformative, if it works.

Recently our security reading group at Princeton studied Bitfrost, the security architecture for OLPC. After the discussion I couldn’t help thinking that Bitfrost seemed too innovative.

“Too innovative?” you ask. What’s wrong with innovation? Let me explain. Though tech pundits often praise “innovation” in the abstract, the fact is that most would-be innovations fail. In engineering, most new ideas either don’t work or aren’t really an improvement over the status quo. Sometimes the same “new” idea pops up over and over, reinvented each time by someone who doesn’t know about the idea’s past failures.

In the long run, failures are weeded out and the few successes catch on, so the world gets better. But in the short run most innovations fail, which makes the urge to innovate dangerous.

Fred Brooks, in his groundbreaking The Mythical Man-Month, referred to the second-system effect:

An architect’s first work is apt to be spare and clean. He knows he doesn’t know what he’s doing, so he does it carefully and with great restraint.

As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used “next time.” Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.

This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.

The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

The danger, in the second system, is the desire to reinvent everything, to replace the flawed but serviceable approaches of the past. The third-system designer, having learned his (or her – things have changed since Brooks wrote) lesson, knows to innovate only in the lab, or in a product only where innovation is necessary.

But here’s the OLPC security specification (lines 115-118):

What makes the OLPC XO laptops radically different is that they represent the first time that all these security measures have been carefully put together on a system slated to be introduced to tens or hundreds of millions of users.

OLPC needs to be innovative in some areas, but I don’t think security is one of them. Sure, it would be nice to have a better security model, but until we know that model is workable in practice, it seems risky to try it out on millions of kids.

Comments

  1. Jim Callahan says

    What about “Child Security?” I was thinking about starting an OLPC child project in the Paramore neighborhood of Orlando on a one-for-two model ($200 buys one laptop locally and one for the global project), complete with wireless. But what about child predators? What responsibility do we have if we give a child a device that could be used by a predator to contact and possibly locate and track them? What if the communications network were adopted by street gangs?

    Before I approach possible sponsors, I would want to have good answers to those questions.

    Jim Callahan
    Orlando, FL
    March 23, 2007

  2. Sorry for commenting off the primary subject. OLPC is a great idea, but I haven’t read the in-depth details, so I’m not in a good position to comment.

    Dan’s idea and Ping’s reply have grabbed my attention.

  3. Hi Ed,

    I’d be happy to speak with you or your reading group about the decisions I made when designing Bitfrost. Many of the comments in this response thread seem to me strangely off-base, as people discuss things like “finding ways to bypass the security system” despite Bitfrost being explicitly designed to grant full control to the user who wishes it, and the spec making this abundantly clear. Phrases such as ‘patronizing overlords’ make me believe the spec is either not being read by the commenters, or not being understood.

    The real innovation in Bitfrost lies in some of the glue that connects the key ideas, but the key ideas pointedly aren’t innovations, and themselves have been known for decades. We had a cast of some of the top experts across the security field here at OLPC last week for an internal security summit. Over-innovation was certainly not on the list of issues with Bitfrost that were voiced. I’d be very interested to hear more about your concern, and I should say I strongly disagree with your assertion that security is not an area where OLPC should be making changes. I’ll be in California in mid-April, and could make some time to meet up if you’d like to chat.

    Cheers,
    Ivan.

  4. Jim has a good example here of the gap between what a design is for and how it gets applied. I don’t think Redmond’s server security team had “home server” in mind when they set the default security policies for the installation; they were aimed at high-security corporate deployments, with server/security specialists performing the setup. Jim, don’t take this as a laugh; I am no developer, and we each have fields where we do better.

    I am no Windows specialist, but I think security is a little looser on Windows Workstation (less locked down), and it would have been easier to set up when it came time to access the Mac.

    I understand your concern, and I hate it when policies clash the way you just described. Oddly enough, I ran into one of those today. A computer was set up some time ago in such an insecure way that the IT director didn’t want anything to do with supporting it. Odder still, it’s on a dedicated internet connection with no access to the domain whatsoever (physically). Anyway, the point is that it’s also locked down with the default policies (the user can’t even change the date and time… you see?), and when I pointed out that it needed the daylight-saving patch (the clock is currently wrong, one hour behind), there was an argument that took more time than applying the patch would have.

    My opinion? I’d take on the responsibility of maintaining that computer, even outside my work hours, because not doing so was stupid to begin with…

    You see, I’m not for enforcing security to the point where a system is unusable… It’s all common sense. All I ask is for people to start giving a damn about what’s going on, to be more aware, and *NOT* to answer spam emails selling Viagra… Hilarious, but if there is so much of it, it’s because some people DO answer…

  5. Dan and Jerome above argue that the big problem with security is lack of user awareness. I submit that there’s often a more basic problem that must be solved before education will be effective: divergent priorities. Two examples follow.

    I work as a software developer for a large corporation. They have security people in their IT department whose job is to ensure that nothing bad happens. My job, on the other hand, is to get work done. They aren’t judged by the degree to which they slow me down, and I’m not judged by the degree to which I practice “good security habits”. (This is true even though we both speak platitudes about the other at annual review time.) Things won’t get better until we both take a more global view of the world.

    As a second example, I run a Windows server at home; at installation time, it automatically installs a pretty tight security policy. There are ACLs on files, ACLs on shares, and a plethora of security policies. That’s good, until I try to access a file from my 13-year-old Macintosh, and it tells me “access denied”. I then get to flounder around turning off security policies at random until it works. Do I remember to turn back on the ones other than the last one? Do I even remember what they were?
    Here again, the motivations of the security developers at Microsoft are at odds with mine. They want to ensure that every bad access is disallowed (they have T-shirts that read “E_ACCESSDENIED? Good, I did my job”). I, on the other hand, want to ensure that every good access is allowed.

    The problem is exacerbated by actions that invoke the term “security” for policies that are not merely orthogonal to my needs, but actively antithetical. Frequently, security seems to mean keeping me from doing what I want to, if Hollywood doesn’t approve. Often, security policies foisted on us are merely security theater, designed to take heat off some group. (Does anyone really think DOT’s war on liquids makes us safer?)

    In short, I think people will accept security measures with slight negative impact if they know that the measures will make them safer. However, they’ve been sold so much snake oil that they’re naturally (and rightfully) suspicious.

  6. A chain is only as strong as its weakest link.

    In security, the weakest link is too often the user, sadly. As Dan said, too many users don’t care about their data security, apart from their money. They won’t give up a bank card’s PIN, but too many are willing to hand you their computer and email passwords if it’s more convenient…

    Sometimes, on purpose, I walk around without my security card on the first day of an assignment or job. I get access to people’s computers just by asking, saying I work for IT and need to perform an update. No verification, no questions. Some people are so eager to fall into routine that, if they usually let the computer guy work alone, they leave and give me their access unattended. Worse, some of the people who introduced the security measures do NOT even follow them… Am I supposed to lecture the IT CO on security?? I guess so (I do), but should I really have to?

    It’s a matter of education, but to what extent? Most people are not *aware*; they could fit the ‘cattle’ reference The Plague makes in the movie ‘Hackers’. They keep their passwords to themselves because they were told to, not because they understand the implications. But it’s a start: they keep the password to themselves. Others don’t even care. I do all I can to educate people; I explain more than I need to, to make sure they understand rather than simply remember.

    I know it seems harsh, and I am sorry. Sad, but true.

  7. Dan,

    > The OLPC model appears to assume that users want to be gatekeepers for their applications, deciding exactly what information to give them access to.

    I think you’ve misread the spec — the whole point, as described, is that users can’t be expected to make rational security decisions. The defaults are sane, bundles don’t ship with more permissions than they need, and users shouldn’t need to make security decisions unless they’re actively trying to give more permissions to a bundle than it asked for.
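
    To make that concrete, here is a minimal sketch of such a default-deny check, in Python. The bundle names and manifest format are invented for illustration, and only the P_* naming style echoes the spec’s labels; none of this is Bitfrost’s actual implementation.

    ```python
    # Hypothetical illustration of default-deny, declared permissions.
    # Bundle names and the manifest format are invented; the P_* style
    # merely echoes the Bitfrost spec's labels.

    DECLARED_PERMISSIONS = {
        "chat-activity": {"P_NET"},   # asked for network access at install
        "journal-viewer": set(),      # asked for nothing extra
    }

    def is_allowed(bundle: str, permission: str) -> bool:
        """Grant only what the bundle declared at install time.

        Everything else is denied without prompting the user, which is
        the point of sane defaults."""
        return permission in DECLARED_PERMISSIONS.get(bundle, set())

    assert is_allowed("chat-activity", "P_NET")
    assert not is_allowed("journal-viewer", "P_NET")  # default deny
    ```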

  8. Ping, I’m not “blaming the user”; I’m simply laying out the problem that we usable-security system designers face. If users were more tolerant of even mild inconveniences for the sake of security, then our jobs would be immeasurably easier. But because users routinely accept huge security risks, *including glaringly self-evident ones*, for the sake of minor conveniences, we have an enormously difficult task in front of us.

    In fact, I used to believe, as you apparently still do, that if users couldn’t manage their information security properly, then it was only because we had failed to present them with a sufficiently clear, simple model that would allow them to make reasonable, considered, safe security decisions. (Indeed, I believe I was one of the first people to articulate that position.)

    Since then, I’ve been inundated with so much evidence that users not only don’t understand, but don’t even very much care about, their information security that I’ve pretty much given up on trying to help them protect their privacy or data in general, and am concentrating instead on helping them protect (only) that which they really care about–which I’m guessing includes their money, and probably not much more. (And even that task is proving to be quite daunting, given users’ low tolerance for even the slightest inconvenience.)

    The OLPC model appears to assume that users want to be gatekeepers for their applications, deciding exactly what information to give them access to. I claim that that assumption is deeply mistaken, and that users and applications will end up conspiring to fatally undermine any security model based on it. Indeed, that is precisely what is happening now, with Web 2.0. Why would OLPC be any different?

  9. Dan Simon writes: “We security folks are well aware that the number one difficulty hampering strong security is the user who doesn’t want to make even the smallest sacrifice of time, effort or convenience for its sake.”

    Dan, I respectfully request that you not speak for all “security folks” — or who is this “we” you refer to? I consider the above-claimed belief flat wrong. Blaming the user is not a productive way to approach security problems; it is merely a denial of responsibility. The difficulty is a combination of technical approaches to security and human interfaces for security that make it impossible or unlikely for users to make safe decisions. We can and should do better.

  10. Of course, one answer to my previous question might be that any kind of authentication system that requires the child to be present, or to take some specific action, could, in certain of the countries targeted by OLPC, put that child at considerable physical risk. Rather than just stealing the device, a thief might be motivated to force the child to participate in using the laptop to do any of a variety of illegal activities, which might result in physical or other harm to the child or his/her family.

  11. It seems that the biggest vulnerability is the fact that the primary backups store the kids’ secret keys in unencrypted form. This is compounded by the fact that the keys themselves are unencrypted both on the server side and on the XO laptops themselves. Putting aside that my kids, at ages when they were just learning to type, never had a problem remembering a password, why not incorporate a biometric ID mechanism in lieu of a password? If a fingerprint scanner would add too much to the cost, why not do face recognition (via the Media Lab’s eigenfaces, or some other similar tech) using the built-in camera that’s already there? The software needed to drive such a mechanism would add no additional cost. Even a simple recognition algorithm, one that, say, falsely authenticates 20% or 30% of the faces presented to it, would be far better than keeping these key pieces of data in unencrypted form.
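
    For contrast, here is a small sketch, in modern Python with the third-party cryptography package, of what wrapping a stored key under a passphrase-derived key looks like. The parameters and layout are illustrative assumptions, not OLPC’s actual backup format.

    ```python
    # Sketch: encrypt ("wrap") a secret key under a passphrase-derived
    # key, instead of storing it in the clear. Illustrative only; this
    # is not the OLPC/Bitfrost backup format.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def _derive(passphrase: str, salt: bytes) -> bytes:
        # PBKDF2 stretches a low-entropy passphrase into a Fernet key.
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                         iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    def wrap_key(secret_key: bytes, passphrase: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        return salt, Fernet(_derive(passphrase, salt)).encrypt(secret_key)

    def unwrap_key(salt: bytes, token: bytes, passphrase: str) -> bytes:
        # Raises cryptography.fernet.InvalidToken on a wrong passphrase.
        return Fernet(_derive(passphrase, salt)).decrypt(token)
    ```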

  12. I concur with Sam on this particular point: if users want to bypass the security model created by their patronizing overlords, they will find a way to do it. Let’s face it. We’re talking about giving “highly secure” laptops to children, who are the most innovative individuals in human society. If these things don’t teach kids about hacking, I don’t know what will (old Bell System circuit switches? Steam tunnels?). I’m not worried about the kids who find a way to do what they want with their laptops. I’m concerned about the response by their “parents”. However it works out, I’m sure the kids will out-innovate us. Like child-proof lighters and adult-only material in general, if the kids want it, they’ll find a way to get it, and learn something useful in the process. We just have to be tolerant when it creates a phenomenon that we had not been expecting.

  13. It’s interesting that you see the OLPC security model as incorporating potentially “too much innovation”. As I read it, it’s more of an attempt to undo the last 15-20 years of innovation in client-side application integration, by setting strict rules that isolate applications from each other and from non-application-specific user data. The goal appears to be to force applications to imitate old-style, environment-oblivious DOS and UNIX applications that were written before elaborate application integration technologies were available.

    I completely understand this motivation–application integration has been, without question, a catastrophe for client security. The problem is that the genie is out of the bottle, and users are generally quite delighted with the wishes it grants. If the OLPC client disables the kind of integration that users apparently crave, then application writers will simply figure out a way around the restrictions, in order to satisfy their users.

    In fact, we’re seeing precisely that phenomenon today, with “Web 2.0”. As applications migrate to the Web, developers of both applications and browsers are clamoring for a way to allow browser-hosted widgets to expose functionality to each other–in flagrant violation of the strict domain isolation rules that have evolved in the browser to protect users from security and privacy violations by malicious Websites. Similarly, OLPC applications could end up simply jumping from the segregated OLPC application model to some OLPC-compatible browser’s integrated Web 2.0 security model, if that’s what users prefer. I see nothing in the OLPC security document that could possibly stop this.
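
    For readers who don’t know those isolation rules by name: this is the browser’s same-origin policy, under which two pages may touch each other’s data only if scheme, host, and port all match. A toy version of the check, purely for illustration:

    ```python
    # Toy version of the browser's same-origin rule: two URLs share an
    # origin only when scheme, host, and port all match. (Real browsers
    # also normalize default ports; this sketch skips that.)
    from urllib.parse import urlsplit

    def same_origin(url_a: str, url_b: str) -> bool:
        a, b = urlsplit(url_a), urlsplit(url_b)
        return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

    assert same_origin("https://example.org/app", "https://example.org/widget")
    assert not same_origin("https://example.org/app", "https://attacker.example/")
    ```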

    We security folks are well aware that the number one difficulty hampering strong security is the user who doesn’t want to make even the smallest sacrifice of time, effort or convenience for its sake. I’m a bit surprised at the extent to which the OLPC security people seem to have assumed that they can inconvenience OLPC users by fiat, by constraining their applications so drastically.

  14. The reason the OLPC needs a completely new security model is that children aren’t mentally equipped to deal with password-style security, and these machines will have networking at a more fundamental level than any other system out there. There’s nothing really new in the idea that applications will each have their own sandbox, that they won’t have network access by default, etc.
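
    As one illustration of that default, here is a sketch of launching a program with no network access at all, using a fresh Linux network namespace via the util-linux unshare tool. This shows the general idea only, under the stated assumptions (Linux, sufficient privileges); it is not how Bitfrost actually implements its sandbox.

    ```python
    # Sketch: "no network access by default" via an empty network
    # namespace. Requires Linux and sufficient privileges; uses the
    # util-linux "unshare" tool. Illustrative only.
    import subprocess

    def run_without_network(argv: list[str]) -> int:
        # "unshare -n" runs the child in its own network namespace,
        # which contains only a downed loopback interface.
        return subprocess.call(["unshare", "-n", *argv])

    # Example: the sandboxed ping has no usable interfaces, so it fails.
    # run_without_network(["ping", "-c", "1", "example.org"])
    ```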

    If you wish to criticize the security model, perhaps you should address Bitfrost on its merits rather than making sweeping generalizations about innovation?

  15. “[I]t seems risky to try it out on millions of kids” is right on. The creators of these new computers need to consider their target markets. The countries where they plan on distributing them do not have a massive IT infrastructure. They don’t have well-staffed help desks, nor ways to push down updates. If something goes awry, will there be anybody to call? Imagine the specter of 20 million $100 doorstops (and a lot of children singed by their first experience with technology).

    Alternately, this new security model could turn into vaporware of Redmondian proportions. How many years has Microsoft been promising a radically new, database-driven file system, only to rename, postpone, scale down, and ultimately withdraw it because of unforeseen problems?

  16. The response to your last sentence is presumably that we know the current model puts millions of kids at unacceptable risk. Still, your general point about innovation and failure is well taken.

    Any chance of publishing notes from your reading group?

  17. As an Architect* (that is, bricks and mortar, not software) I would note that the lesson Fred Brooks relates translates even to my field.

    One thing I find fascinating is the frequent occurrence of aesthetic language in descriptions of software design: “…spare and clean…with great restraint…frill after frill and embellishment after embellishment…”

    When I happened across the window manager blackbox in particular, I felt sure that aesthetic motivations were no small factor in writing code, and in choosing what to develop, especially in the FOSS world.

    * note capital A

  18. Isn’t this one of the major things that regularly puts military contracts way over time and budget? Changes after the specs are written are another (also mentioned, if I remember correctly, in Brooks’ book).