June 16, 2024

The future of photography

Several interesting things are happening in the wild world of digital photography as it collides with digital video. Most notably, the new Canon 5D Mark II (roughly $2700) can record 1080p video and the new Nikon D90 (roughly $1000) can record 720p video. At the higher end, Red just announced cameras, shipping next year, that will record full-motion video (as fast as 120 frames per second in some cases) at far greater than HD resolutions (for $12K, you can record video at a staggering 6000×4000 pixels). You can configure a Red camera as a still camera or as a video camera.

Recently, well-known photographer Vincent Laforet (perhaps best known for his aerial photographs, such as “Me and My Human”) got his hands on a pre-production Canon 5D Mark II and filmed a “mock commercial” called “Reverie”, which shows off what the camera can do, particularly its see-in-the-dark low-light abilities. If you read Laforet’s blog, you’ll see that he’s quite excited, not just about the technical aspects of the camera, but about what this means to him as a professional photographer. Suddenly, he can leverage all of the expensive lenses that he already owns and capture professional-quality video “for free.” This has all kinds of ramifications for what it means to cover an event.

For example, at professional sporting events, video rights are entirely separate from the “normal” still-photography rights given to the press. Yet every pro photographer is now every bit as capable of capturing full-resolution video as the TV crew covering the event. Will still photographers be contractually banned from using the video features of their cameras? Laforet investigated while he was shooting the Beijing Olympics:

Given that all of these rumours were going around quite a bit in Beijing [prior to the announcement of the Nikon D90 or Canon 5D Mark II] – I sat down with two very influential people who will each be involved at the next two Olympic Games. Given that NBC paid more than $900 million to acquire the U.S. Broadcasting rights to this past summer games, how would they feel about a still photographer showing up with a camera that can shoot HD video?

I got the following answer from the person who will be involved with Vancouver, which I’ll paraphrase: Still photographers will be allowed in the venues with whatever camera they choose, and can shoot whatever they want – shooting video, in and of itself, is not a problem. HOWEVER – if the video is EVER published – lawsuits will inevitably be filed, credentials revoked, etc.

This to me seems like the reasonable thing to do – and the correct approach. But the person I spoke with who will be involved in the London 2012 Olympic Games had a different view; again I paraphrase: “Those cameras will have to be banned. Period. They will never be allowed into any Olympic venue” because the broadcasters would have a COW if they did. And while I think this is not the best approach – I think it might unfortunately be the most realistic. Do you really think that the TV producers and rights-owners will “trust” photographers not to broadcast anything they’ve paid so much for? Unlikely.

Let’s do a thought experiment. Red’s forthcoming “Scarlet FF35 Mysterium Monstro” will happily capture 6000×4000 pixels at 30 frames per second. If you multiply that out, assuming 8 bits per pixel (after modest compression), you’re left with the somewhat staggering data rate of 720MB/s (i.e., 2.6TB/hour). Assuming you’re recording that to the latest 1.5TB hard drives, that means you’re swapping media every 30 minutes (or you’re tethered to a RAID box of some sort). Sure, your camera now weighs more and you’re carrying around a bunch of hard drives (still lost in the noise relative to the weight that a sports photographer hauls around in those long telephoto lenses), but you manage to completely eliminate the “oops, I missed the shot” issue that dogs any photographer. Instead, the “shoot” button evolves into more of a bookmarking function. “Yeah, I think something interesting happened around here.” It’s easy to see photo editors getting excited by this. Assuming you’ve got access to multiple photographers operating from different angles, you can now capture multiple views of the same event at the same time. With all of that data, synchronized and registered, you could even do 3D reconstructions (made famous/infamous by the “bullet time” videos used in the Matrix films or the Gap’s Khaki Swing commercial). Does the local newspaper have the rights to do that to an NFL game or not?
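
A quick sanity check of the arithmetic above, using only the figures from this paragraph (6000×4000 pixels, 30 frames per second, 8 bits per pixel after compression, a 1.5TB drive):

```python
# Back-of-the-envelope data rates for the hypothetical full-rate Red shoot.
width, height = 6000, 4000      # pixels per frame
fps = 30                        # frames per second
bytes_per_pixel = 1             # 8 bits/pixel after modest compression
drive_bytes = 1.5e12            # one 1.5TB hard drive

rate = width * height * fps * bytes_per_pixel          # bytes per second
print(f"{rate / 1e6:.0f} MB/s")                        # 720 MB/s
print(f"{rate * 3600 / 1e12:.1f} TB/hour")             # 2.6 TB/hour
print(f"{drive_bytes / rate / 60:.0f} min per drive")  # about 35 minutes
```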

Of course, this sort of technology is going to trickle down to gear that mere mortals can afford. Rather than capturing every frame, maybe you now only keep a buffer of the last ten seconds or so, and when you press the “shoot” button, you get to capture the immediate past as well as the present. Assuming you’ve got a sensor that lets you change the exposure on the fly, you can also imagine a camera capturing a rapid succession of images at different exposures. That means no more worries about whether you over- or under-exposed your image. In fact, the camera could just glue all the images together into a high-dynamic-range (HDR) image, which sometimes yields fantastic results.
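
The “capture the immediate past” idea amounts to a ring buffer. A minimal sketch (the frame source here is simulated; in a real camera, sensor readouts would feed `on_frame`):

```python
from collections import deque

FPS = 30
BUFFER_SECONDS = 10

class PreCaptureCamera:
    """Keeps the last BUFFER_SECONDS of frames; the shutter saves past + present."""
    def __init__(self):
        self.ring = deque(maxlen=FPS * BUFFER_SECONDS)

    def on_frame(self, frame):
        # Called once per sensor readout; the oldest frames fall off automatically.
        self.ring.append(frame)

    def shutter(self):
        # "Shoot" is really a bookmark: return everything currently buffered.
        return list(self.ring)

cam = PreCaptureCamera()
for i in range(600):       # simulate 20 seconds of frames, numbered 0..599
    cam.on_frame(i)
saved = cam.shutter()
print(len(saved))          # 300 frames, i.e., the last 10 seconds
```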

One would expect, in the cutthroat world of consumer electronics, that competition would bring features like this to market as fast as possible, although that’s far from a given. If you install third-party firmware on a Canon point-and-shoot, you get all kinds of functionality that the hardware can support but which Canon has chosen not to implement. Maybe Canon would rather you spend more money for more features, even if the cheaper hardware is perfectly capable. Maybe they just want to make common features easy to use and avoid cluttering the UI. (Not that any camera vendors are doing particularly well on ease of use, but that’s a topic for another day.)

Freedom to Tinker readers will recognize some common themes here. Do I have the right to hack my own gear? How will new technology impact old business models? In the end, when industries collide, who wins? My fear is that the creative freelance photographer, like Laforet, is likely to get pushed out by the big corporate sponsor. Why allow individual freelancers to shoot a sports event when you can just spread professional video cameras all over the place and let newspapers buy stills from those video feeds? Laforet discussed these issues at length; his view is that “traditional” professional photography, as a career, is on its way out and the future is going to be very, very different. There will still be demand for the kind of creativity and skills that a good photographer can bring to the game, but the new rules of the game have yet to be written.


  1. Thanks to Dan Wallach for bringing up this kind of topic.

    From time to time I like to speculate about the future of photography, and how new innovative technologies will provide the photographer with tools heretofore never imagined.

    Imagine that your camera is suddenly 50 to 500 times more sensitive to light (See article here). Imagine a camera with a fixed lens that one could set to any focal length instantly (See article here). Imagine cameras that can record 200,000,000 frames per second. Imagine an SD card that holds 2TB of data. Imagine a camera powered by fuel cells, not by batteries… on and on… just imagine.

    – Rebecca Jordan, Webmaster of Social Networking Blog.

  2. The way you mention media requirements makes it sound like swapping a hard drive every 30 minutes is onerous. Although it might seem that way to a still photographer, used to carrying around a day’s worth of media (in the form of memory cards or 35mm film) in a camera bag, it’s no big deal to a videographer. In fact, swapping a hard drive every 30 minutes is a 50% step up from traditional tape systems.

    The de facto standard for analog tape was (and still is, to a great extent) Betacam SP. Although Beta SP comes in several different physical formats, typically the tapes used in cameras only hold about 20 minutes of video, and it’s not unusual to see camera crews show up with *cases* of tapes per camera. These tapes are actually not that much cheaper than an inexpensive 3.5″ hard drive, either (especially if you factor in a few re-uses of the HDDs, which you really don’t want to do with tape, and add the 50% greater capacity of a 30-minute HDD versus a 20-minute tape); within the next few years I suspect they’ll be on par.

    Although I’d expect most big events like the Olympics and professional sports games to use cameras tethered to a central production facility (because they’re being broadcast live, obviously) with the recording done there, the data rate and storage requirements are not at all impractical for a standalone camcorder-type unit.

  3. What legal case? If I buy a Canon camera I own it. If I then modify it, Canon has nothing to say about it.

  4. Hacker Wanted says

    There’s an interesting freedom-to-tinker issue related to the Canon EOS 5D Mark II. Filmmakers are excited about the huge sensor (it is dramatically larger than that of any video camera in its price range) and the ability to use 35mm still lenses at full frame (theoretically, the shallow depth of field would trump even the Super 35 format used by Hollywood).

    But the Canon 5D2 offers a frame rate of 30fps only, and it lacks any significant manual control over exposure settings in video mode. Many of us suspect that Canon could enable 24fps (which simulates film) and 25fps (PAL-compatible) with a simple firmware tweak. And it’s almost certain that Canon has gone out of its way to rob the camera’s video mode of the manual exposure control that is present for still photography. It seems Canon was afraid to make the camera “too good” — and thus threaten its existing large and profitable prosumer video line.

    Over at the new forum cinema5D.com they are working on what they actually call The Least Comical Workaround:

    But a site called Canon Hack just showed up trying to raise money to reward a hacker for figuring out a firmware update:

    I have no idea what it would take to do this, but I imagine Canon would not be happy about it.

    However, Canon is guilty of misleading marketing at present. They specifically brag about how the full-frame sensor will give you better-than-Hollywood shallow depth of field. But you actually can’t get that benefit in most situations without being able to control aperture. Canon advertises as built-in what will be for many users a largely theoretical benefit — because there’s no simple way to access it.

    Does it help any legal case that all this hack would do is give the camera a feature that Canon claims it already has?

  5. Now I’m wondering what crowdsourced image synthesis would look like. Depending on the scale of the event, you might start with GPS or cellular or Wi-Fi/Bluetooth triangulation for a first cut at camera locations. The data-handling problem would be enormous, but it would pretty much offer people, years later, an opportunity to relive events that were important to them.

  6. Those who purchase the TV rights should be relaxed about this.

    A photographer on the touchline just does not have the viewpoint to produce usable coverage.

    An image based rendering system could in theory be used to synthesize a better angle but it won’t compete with the proper coverage without access to the “proper” viewpoint. (and anyway I reckon it is not really practical without a degree of organisation that would be blatantly obvious to the event organisers)

    Of course the “rights holder” will probably be paranoid about this as usual – but they are highly unlikely to lose money from it.

    In any case, I am not sure about the legal status of all these coverage rights; I have a feeling that, according to a strict interpretation of copyright law, they actually don’t exist. Sports event organisers can of course negotiate who they let into the stadium, and set up contracts with them – but if you fly over the stadium in a balloon and film from there, then you own the copyright. A sports event is a historical event – not a creative work.

  7. Lawrence D'Oliveiro says

    Don’t forget the rise of the ubiquitous consumer camera, either built into a cell-phone or on its own, small enough to conceal in a pocket or a wallet or elsewhere. How will the venue managers keep these out? They could render the whole concept of “exclusive” photo or video rights moot.

    Ultimately I think this will have more impact than arguments over the capabilities of professional-oriented gear.

  8. Rather than “video” the whole sports event, with the shutter button acting as a bookmark feature, the camera could constantly keep an N-second cycling buffer, with the shutter button meaning “save the last N seconds and the next N seconds to the memory card.” You just keep your camera trained on the action and press the button when something interesting happens.

  9. Pascal Scheffers says

    The ‘4K’ Red ONE camera shoots Red RAW at 36MB/sec (megabytes per second). It stands to reason that three times more pixels would not take up 20 times more data. The Scarlet FF35 Mysterium Monstro will probably have a recording rate called REDCODE 120, or a number close to it. This is still a stunning firehose of data, of course. You’d be cropping and/or downsampling this enormously. That is one of the points of these cameras: you have something extra to work with in post-production.

    RED uses Red Code RAW, which significantly compresses the video stream. It uses a wavelet codec, which is slightly lossy, but less so than others. Actually capturing full lossless information may be interesting for scientific applications, and is perhaps possible with external recorders. However, as you mentioned, the true, uncompressed data stream is too unwieldy, especially since the Red ONE records at 12 bits per channel, not 8, and the camera we’re talking about now records at 16 bits per channel.

  10. To me, adding video capture to a digital SLR is possibly one of the silliest examples of feature creep I’ve ever seen.

    You’re not really “adding” it; it’s just a natural extension of burst shooting. You want to be able to do several frames per second for action shots — a long-time SLR feature — and you get to 5 or 10 per second, and you’re nearly at the 15/sec of some half-assed video. A few more frames per second and you’re at 30/sec and you’re doing video, really good video if you have a good SLR. Or really useful still photography in burst mode. Which are you doing at that point, burst-mode still photography or video? Do you want to limit your burst-mode still-photo utility so you aren’t “doing video”?

    The thing is it’s become an artificial boundary and it will be even more artificial a boundary in the future.

  11. The Casio EX-F1 is a high-end non-DSLR that already has a shooting mode where it constantly buffers 60 full-resolution shots at varying speeds (from 1 to 60 frames per second) in order to provide you with a set of shots that bracket the point in time where you actually pressed the shutter. It also throws in a range of high-speed shooting modes, up to 1200fps, although at those high speeds the resolution is dramatically reduced.

    The other thing to remember is that storage media have been increasing in density and falling in cost at rates that are really astounding over the last few years. A terabyte of hard drive storage costs less than $150 these days, and on the solid-state front a 250GB SSD drive is less than $900. For hard drives, the price/capacity ratio is halving every 10 months or so, and on the solid-state front it’s been halving about every 6 months over the past few years. The recession might slow things down, but it’s entirely reasonable to foresee HDD storage costing <$5 per TB by 2012, with high-end 2.5" drives probably holding around 10-20TB in a single unit by then. That is, if rotating-media HDDs haven’t already been totally wiped out by solid-state storage, which should cost about $1 per TB at around the same time. It’s not at all unlikely that high-end pro camera systems will be able to carry internal storage capable of saving on the order of 10 terabytes of imagery/video in that time frame. Certainly by the time the 2016 games come around, even low-end consumer cameras will have enough internal storage to save many hours of video at resolutions and frame rates way beyond those required for HD playback.
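
    The halving projection above is easy to sanity-check: $150 per TB, halving every 10 months for the roughly 48 months until late 2012, lands close to the quoted “<$5 per TB” ballpark:

    ```python
    # Extrapolate HDD price per TB under the "halves every 10 months" assumption.
    price_now = 150.0        # dollars per TB today
    halving_months = 10
    months_ahead = 48        # roughly now through late 2012

    price_2012 = price_now / 2 ** (months_ahead / halving_months)
    print(f"${price_2012:.2f}/TB")   # about $5.4/TB
    ```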

  12. My estimates are assuming you want the equivalent of a “raw” image for each and every frame. That yields maximum flexibility and quality, at the cost of higher disk space usage. Certainly, if you drop down to JPEG, then you’re talking about one bit per pixel rather than eight, and you could also drop from 24 megapixels to six, still yielding plenty of quality. At that point, we’re talking about 22MB/s or 79GB/hour, at which point a 1.5TB hard drive can hold 18 hours of video — a solid day’s shooting.

    The review time is an interesting question. That’s where my idea of “bookmarking” comes into play. In a certain sense, part of the job of the photographer is not only that they frame a shot, but that they press the button at the critical moment in time to get the winner. Sports photographers’ cameras hum along around 10 frames/sec (plus or minus), and a common strategy is to mash the button down when something cool is about to happen to make sure you get it. That same UI turns into a bookmark within the video stream. Later on, just like today, a photo editor would start with the bookmarked areas and then select the specific shots that best capture the critical moment. That process is really no different from how editors work today. What’s new is that the editor could say “give me every view of the field at precisely the same time as this” and then hand things off to some kind of sophisticated software tools to build 3D reconstructions.

    • That makes sense, but it also goes against the notion of just taking the feed from video cameras. In many/most cases the still and motion camera operators are going for very different things in their images, and have seriously different constraints on position, movement, and technique. The video folks in particular have to be producing interesting, watchable (no vertigo, no blanks) footage at all the times that aren’t the decisive moment. In an ideal world you’d get the still and video folks to work together, but initially remember the old line about every big technical advance in hardware setting the software back ten years…

      I think things might particularly get interesting when there are enough cameras, plus good motion-tracking software, so that you can not only do the 3-D reconstruction but effectively synthesize camera positions and angles.
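
    The compressed-rate figures earlier in this thread (six megapixels, one bit per pixel, 30 frames per second) can be checked in a couple of lines:

    ```python
    # JPEG-level back-of-the-envelope rates for the "bookmarking" scenario.
    pixels = 6e6             # six megapixels per frame
    bits_per_pixel = 1       # roughly JPEG-level compression
    fps = 30

    rate = pixels * bits_per_pixel / 8 * fps                 # bytes per second
    print(f"{rate / 1e6:.1f} MB/s")                          # 22.5 MB/s
    print(f"{rate * 3600 / 1e9:.0f} GB/hour")                # 81 GB/hour
    print(f"{1.5e12 / (rate * 3600):.1f} hours on 1.5TB")    # about 18.5 hours
    ```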

  13. To me, adding video capture to a digital SLR is possibly one of the silliest examples of feature creep I’ve ever seen.

    There is a reason why the cameras used by news crews and what-have-you pretty much all conform to the same shape (one that lets you carry and shoot with the camera on your shoulder), and that’s stability. If you got a photographer with a video-capable digital SLR to shoot video and he’s using a long lens (like most sports photographers I’ve seen), good luck getting a stable, clear shot.

    Plus, I don’t think I’d want to use a digital SLR to shoot video. This is partially because I don’t think that the sensor is really built for it — I’d think that a digital SLR’s sensor would be geared towards taking one picture at as high a resolution as possible really, really quickly, rather than recording a long, continuous stream of pictures.

    Maybe this is just some sort of weird feature creep from point-and-shoots to digital SLR cameras. I don’t know if anybody out there has actually been asking for something like this, but I don’t think it’s a feature I’d ever use. Especially since some new SLRs that support video (like the Nikon D90) don’t support auto-focus while in video mode, and you can’t use the viewfinder — the mirror is up to expose the sensor!

    Anyways, that’s just my view. If I was going to shoot video, I’d have saved up for a camera like my buddy’s several-thousand-dollar high-definition camera, and not a Nikon D80.

    Oh, for a review (by someone I trust, at least) of the video features of the Nikon D90, check Ken Rockwell’s site: http://kenrockwell.com/nikon/d90.htm#hd


    • The reason for “why” is larger sensor sizes, which lead to two things that make for more potential:
      – Shallower depth of field
      – Better low light performance

      The D90 and 5D mk2 are first-generation implementations. The D90 is a consumer product, and the 5D mk2 is a (not-yet-released) prosumer camera. They are superior to typical consumer video cameras in some ways, and inferior in others. The D90 is interesting in that I would expect all consumer DSLRs to offer video capability in the next couple of years. The 5D mk2 is interesting because, as Laforet showed, it can be used to create video equal to that of the highest-end movie cameras.

      This generation might not interest you, but the second or third will bring great improvements, and prosumer HD video cameras will face a lot of pressure to match them and lower their prices.

      I think a more relevant aspect for Freedom to Tinker to cover would be what happens when everyone is wearing a video camera that is on and recording 24/7. What implications does that have for privacy, for our memory, and for our lives?

  14. This is video you’re talking about, so the compression ratios can be rather higher. Back of the envelope says you should be able to do 150-200 MB/s, which would be more like 2-3 hours on a 1.5-TB disk (or its customized N-wide equivalent). And figure that cameras and components will be half the dimensions they are now. By 2012 the only way to ban such equipment will be with cavity searches and a team of forensic EEs. So contracts will have to be rewritten.

    If still-picture outlets do start using video feeds, expect quality to suffer significantly, not because of any loss of pixels, but because no one has time to review that many frames.