THE SPEED OF CHANGE

Posted on Apr 11, 2013 in Blog

I’ll be the first to admit that I am not a prolific blogger, but I was nonetheless astounded that only a few months after my first blog entry, just as I was beginning to think about the second, I was sitting in a demonstration at which Sony was introducing the next generation of 4K cameras in its lineup, the F55 and the F5. I felt like I had just written about my experiences with the F65, a brand new camera at the time, and here was already the follow-up. And not just that: Sony addressed some of the very issues I had brought up regarding the F65 by introducing a much smaller, lighter and yet more affordable 4K camera with onboard 4K RAW recording capability. By the time I’m actually writing this, Sony is already delivering the F55 to its first customers and I have seen it in the field, which tells you not only something about my admittedly slow writing pace, but also about the incredible speed at which new camera technology is being developed and deployed.

Coincidentally, I had been asked by my friend Garrett Smith, who was preparing a presentation for the American Film Institute, to assist him with the graphic design of charts demonstrating exactly this point. So the changing pace of technological development and, equally important, of the acceptance of new technology became the theme that muscled its way into this blog entry, on which I will share my own thoughts as well as Garrett’s. Garrett Smith, for those who don’t know, was in charge of Paramount Pictures’ mastering for most of the past 25 years and, in the few years before he retired, the VP in charge of Production Technology for Paramount Pictures Corporation. We met in the late ’80s, when he had just taken over the mastering department and was setting up the video transfer for U2: Rattle and Hum. I was a young cinematographer, just out of USC Film School, working at a real Hollywood studio for the first time.

I’ll digress here for a moment, because I like sharing the story of our first meeting, when Garrett asked me which elements from the film I wanted to use for the video transfer. I had approached the meeting with a fair amount of apprehension, because I was convinced the studio would want me to use the intermediate lo-con print as the transfer source (a common practice at the time), and I really didn’t like using lo-con prints as transfer sources at all. To my great surprise, Garrett didn’t tell me what to use but asked me what I wanted to use. We immediately launched into a discussion of the pros and cons of different transfer elements, and both had to admit that with the many sources we had – 16mm B&W original negative, 35mm B&W original negative, 35mm color original negative, 35mm B&W blow-up negative, as well as a lo-con IP, a dupe negative and a release print for each of these elements – we had no idea what the best element in each case might be, though we both felt it probably wouldn’t be the lo-cons.

After we did extensive tests using the various film elements, Garrett put me in a room at one of the best transfer houses in town, left me with all the elements I had selected and told me to give him a call when I had something to show. I asked him how much time I had and when he was expecting the call, to which his answer was: “When you’re happy.” A studio executive who knew his stuff, cared about whether I was happy and didn’t impose any time restrictions on the transfer – even then I knew that this was special and that I probably wouldn’t have such an experience ever again. Naturally, Garrett invited Jordan Cronenweth, the legendary cinematographer of Blade Runner, to come in and transfer the color footage he had shot himself. (Many years later, I was able to re-master those same elements, which had been carefully preserved in Paramount’s Archive Building, to HDTV with Donald Freeman at Sunset Digital. Needless to say, this was once again at the invitation of Garrett, practically the only studio executive in the business who would always call the director and the DP to be in the room, even if a transfer was done years later. By that time, Jordan had already passed, and I did my best to do justice to his intent.) Suffice it to say, we are still friends, and Garrett has been at the forefront of technological development and an expert in post-production all this time. I should probably dedicate an entire blog entry to him, but for now I’ll just say that this one is based on both of our thoughts about the matter and that the charts (which I hope he will let me share) are definitely his.

From the invention of the motion picture camera, roughly in 1890, until approximately 90 years later in 1980, very little changed about the basic model of motion picture acquisition and exhibition: There was a camera, which exposed film that was later developed in a laboratory, physically cut and spliced, and eventually projected in a theater. Sure, there were technological advancements along the way, when cameras got motors and no longer needed to be hand-cranked, or when those motors became constant enough that they could be synchronized with sound recordings. Color, widescreen formats and all kinds of specialty cameras were developed to improve the capabilities of the equipment and enhance the picture. But the very basic idea – that a physical piece of film had to be run through a mechanical camera to expose a photo-chemical emulsion, then shipped to a lab to be developed, then copied (printed) for review and editing, with the negative later conformed to the edited print so it could be copied again, and that print then run through a projector – all that stayed the same for nearly 100 years, and the development of the technology dealt solely with improving the functionality and capability of the equipment.

[Graphic 1: the film-based production model, 1890–1980]

What changed starting around 1980 was that this very model of film production was challenged by a digital revolution, which first took hold in post-production and only more than a decade later on the production side of motion pictures as well. While there had been a marked improvement in editing technology before, when the flatbed was invented (it allowed for faster editing and for multiple people to watch cuts in the editing room), all film still had to be physically cut and spliced back together to be projected and shown. Multiple versions, while possible with multiple work prints, were pretty much unheard of due to the cost, and reviewing different versions was limited to one at a time. Trying out alternative ideas required physically changing the cut to view it, then changing it back to the old version. By moving the editing process to videotape, which made copying fast and cheap, it became possible to generate different cuts much more quickly and even have them exist simultaneously for review. Watching edits on television monitors allowed directors, producers, studio executives and stars to be much more involved in the process – instead of setting up screenings, it was now possible to messenger a tape across town and have someone review a cut in their office, or to invite people into the editing room to watch a cut on a television monitor. Today it is even easier – I was just reviewing two different versions of the Space Station 76 cut online, both uploaded to a private Vimeo account by our director Jack Plotnick in order to get feedback before picture lock. I could review these without ever getting up from my desk to go to the mailbox or having to insert a videotape or DVD into a playback machine…

Videotape was, of course, only the first step towards a process that soon became digital and non-linear. First by using multiple redundant playback sources (EditDroid and Montage were two such systems, based on multiple redundant copies of the dailies on LaserDisc and Beta tapes respectively, which created a sort of fake random access) and later by using the hard drives and RAM of computers, the editing process improved so much in efficiency that soon the entire industry had switched. Cuts didn’t exist anymore as hard copies, not even on videotape, but only as part of a constantly changing and changeable edit decision list. All the while, of course, films were still being shot on film and even screened on film for larger test audiences – only the editors were looking at video transfers, while work prints were still being produced and conformed to the video cuts. The digital revolution in post-production also included the introduction of computer-generated visual effects, which at first were still being filmed out for a physical print finish. Later came DIs (digital intermediates), where the entire post-production was done in the digital realm and then filmed out to make release prints (hence: digital intermediate). It was not until the wide acceptance of HD and high-quality digital projection systems, followed fairly recently by 2K and 4K workflows and projectors, that it became possible to stay digital throughout the entire post-production and exhibition of a movie. But while digital post-production technology had advanced and matured, production was still mainly film-based. Yes, there were early adopters of HDTV technology, who shot films with cameras we would now consider fit for nothing more than home movies, while telling us that digital had definitely arrived. But outside of those trying to talk us and themselves into it, most filmmakers knew that the quality was simply not there – not for the big screen – and continued to shoot film. This digital post-production revolution happened roughly in the period between 1980 and 2000 and is by now essentially complete. I’m not aware of many films over the last few years that still finished on film.
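
For readers who have never seen one: an edit decision list is just a plain-text list of events, each naming a source reel and the source and record timecodes for one piece of the cut. The excerpt below is an illustrative example in the venerable CMX3600 format – the reel names and timecodes are invented:

```
TITLE: REEL 1 CUT 4
FCM: NON-DROP FRAME

001  A001 V  C        01:02:33:15 01:02:40:02 01:00:00:00 01:00:06:11
002  A004 V  C        05:11:08:00 05:11:12:20 01:00:06:11 01:00:11:07
```

Each event line reads: event number, source reel, track (V = video), transition (C = cut), source in/out, record in/out. Re-editing changes only these lines; the conform then pulls exactly those source ranges and joins them in record order, so no physical copy of the cut ever needs to exist.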

I’ll digress here again to propagate Garrett’s suggestion that we change the nomenclature for digital post-production: Instead of using the term DI (Digital Intermediate), which implies that there is a film finish and that the digital stage is only an intermediate, we should agree to call it a DF (Digital Finish), as that is really what we are doing at this point. So, if you agree, start calling it DF instead of DI and spread the word.

The digital revolution in the acquisition of motion pictures, i.e. camera technology, started in earnest with the new millennium. Though the manufacturers of the equipment were trying to make us believe that digital cinema had arrived as soon as the first HDTV cameras existed in the 1990s, the images being produced by those cameras did not satisfy the people who cared more about the movies than about pushing the technology, and acceptance was slow. Early adopters like Robert Rodriguez and George Lucas, who started using Sony’s F900 HD camera, brought digital cinematography into the mainstream in the early 2000s and started a decade of upheaval, which has by now led us to a predominantly digital model. It helped that HD quality was sufficient for television, and with the SAG labor dispute of 2008/2009 many networks were pushed to switch to digital cameras so they could continue to produce under the AFTRA contract. This started a wider transition on the TV side, which continued into features with the arrival of 2K, 3K and 4K cameras around the end of the decade. Panavision’s Genesis, Arri’s D-20, D-21 and now Alexa, and Sony’s F23, F35, and now F65 and F55 all contributed to setting the bar higher and higher, using better sensors with more pixels and less compression. Where the evolution of camera technology was once measured in decades and years, it became years and months, and now, at the beginning of the second decade of the 21st century, we are dealing with months and weeks. Garrett’s wonderful chart, which doesn’t even include the many semi-professional cameras being used in TV and film production (we had several Canon 7Ds on our camera truck for Fox TV’s The Good Guys, and the producers were delighted every time we set them up as additional cameras for stunts), shows how much the pace of technological development has increased and how quickly these cameras are accepted into the production process.

[Graphic 2: Garrett’s chart of camera introductions and their accelerating adoption]

When I started shooting films for Hollywood studios, even choosing Arriflex over Panavision was something that had to be explained, because the studios were used to dealing with Panavision and slightly suspicious of this German company whose cameras were mainly used in Europe. Arriflex countered this by advertising widely whenever a film shot on its cameras won an award or an Oscar. (Last year, 7 out of 9 Best Picture nominees were shot using Arri cameras, so they don’t have that problem any more!) Acceptance of new technology by cinematographers used to be slow, and by studios and producers even slower. I remember having to make a presentation to Sony about Super 35, which I wanted to use to shoot widescreen on The Cable Guy, even after Ben Stiller and our producers were on board with the idea. I honestly think that if I hadn’t been able to point to James Cameron, who helped pioneer the technique on films like Terminator and Aliens, Columbia would not have allowed their movie to be shot in the Super 35 widescreen format.

That was then – this is now. Today I encounter almost the opposite problem. Frequently, producers are excited about a new technology and eager to move ahead even before all the consequences of using it are clear. In the film-based model, which existed for a very long time, everything had become standardized. It didn’t matter where in the world you filmed: you exposed the film stock of one of three companies – Kodak, Fuji or Agfa – and, if you weren’t using one of the two biggest laboratories in the world – Technicolor or Deluxe – whatever lab you were using followed strict guidelines developed for processing the film. Everybody knew exactly where you would end up – on a 35mm projector in a cinema, projecting a release print at 16 foot-lamberts – and the path to get there was clear. Shooting a digital format in 2013 presents an entirely different picture. There are literally dozens of different RAW file formats (if one chooses to record the raw sensor data rather than process the information in camera), tied to different cameras and manufacturers, and these files may be recorded on an equally great variety of media. Reading the media, de-Bayering the image to produce a video signal that can be watched on a monitor, transcoding to other file formats, and deciding which of those to use for editing and VFX – none of these steps is standardized, and each camera requires different hardware and software.

Not every post house may be comfortable with, or even able to use, every kind of file, and workflows vary not only from post house to post house but from project to project. I now see it as a big part of my job to get everybody involved in the workflow of a project together at one big conference table and discuss exactly how the workflow will work before the start of any film. The best post houses are not only equipped to do this and bring in everyone involved on their end for an in-depth discussion; they understand the necessity and prefer it to the misunderstandings that can happen when these issues are not resolved in advance. It is frequently the producers and directors now who have to be convinced that it is a wise use of their time to sit down and have a very “boring” technical discussion involving everyone who has anything to do with the post-production workflow, on the production and on the vendor side. There has been a profound change in how studios and producers feel about digital technology, and technology in general. There are now so many different cameras by so many different manufacturers that it is a full-time job keeping track of what is out there and what it is suited for. Yet more often than not, I will have clients asking me about a brand new camera they have heard about, or pushing for one even before it is on the market. Everyone is excited about technology these days, and where caution used to rule, there is now a race to adopt the new.
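
Since de-Bayering is one of the steps where every workflow diverges, here is a minimal sketch of what the interpolation itself does, assuming a simple RGGB mosaic and plain bilinear filtering. The function name and structure are my own illustration, not any manufacturer’s pipeline – a real camera has its own mosaic layout, black-level and white-balance handling, and far more sophisticated interpolation:

```python
# Minimal bilinear de-Bayering sketch, assuming an RGGB sensor mosaic.
# Illustrative only - not any camera manufacturer's actual pipeline.
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic):
    """mosaic: 2-D array of raw sensor values in an RGGB pattern.
    Returns an (H, W, 3) RGB image via bilinear interpolation."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Kernels that average the nearest samples of each color: green has
    # four axial neighbors; red/blue have axial and diagonal neighbors.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    rgb = np.empty((h, w, 3))
    rgb[..., 0] = convolve(np.where(r_mask, mosaic, 0.0), k_rb, mode="mirror")
    rgb[..., 1] = convolve(np.where(g_mask, mosaic, 0.0), k_g,  mode="mirror")
    rgb[..., 2] = convolve(np.where(b_mask, mosaic, 0.0), k_rb, mode="mirror")
    return rgb
```

Even this toy version shows why the step is camera-specific: change the mosaic pattern and every mask and kernel placement changes with it.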

I think the smartest directors and cinematographers have always used the appropriate technology for the task at hand. When Michael Mann directed Collateral, he used the Viper for most of the film. At the time, it was the highest-quality digital camera for shooting night exteriors and exposing the downtown skyline at night; on film, the image simply wouldn’t have been as bright and as close to what the human eye can see. For the insert-car shots of the driving cab, however, he switched to the more compact and robust F900, and when it came to shooting the shoot-out in the nightclub, he didn’t hesitate to switch to film, which was still the best way to shoot slow motion. More recently, Gore Verbinski shot the day exteriors of The Lone Ranger on film with Panavision cameras and anamorphic lenses, but switched to the Alexa for night exteriors, taking advantage of the strengths of each medium. Both of those examples are fairly high-budget movies, however, which do get a choice of technologies – many smaller productions can’t afford to carry multiple camera packages for film and digital and usually end up shooting digitally these days. Though the revolution is ongoing, one thing is clear: Digital has won. Kodak will stop manufacturing motion picture film within a couple of years, and the major laboratories are scrambling to change their business model to one that does not include film processing.

At this point, the struggle is not between film and digital but between different digital formats and technologies. File-based recording has won over tape. Camera systems compete with each other on the variables of RAW vs. video, more compression vs. less compression, and cost. It is foreseeable that new cameras will continue to enter the marketplace at an alarming rate, vying for market share. (I say alarming because the rapid pace at which Sony puts out new products makes it hard for vendors to amortize their equipment before they have to invest in new cameras.) Efforts to standardize color spaces and file formats, notably ACES, the Academy Color Encoding Specification, continue to work toward making different cameras fit within a common post-production framework. Hopefully, if this idea takes hold, every camera manufacturer will allow their proprietary sensor information to flow into a common and standardized color space, large enough to retain all original information without loss. That would allow software and hardware developers to come up with global solutions that work for any camera or capture device, and in turn make it simpler to shoot with new cameras as long as they can be used within the ACES standard. We are currently in the middle of a transformation that took the post-production side a couple of decades to work out – and even now post is being affected by the changes in production. With a little luck, the technology will eventually settle on certain standards: 4K projection looks pretty amazing, and there may be little need or incentive to move beyond it in regular theaters. That could mean that cameras don’t have to improve beyond 8K to match the capabilities of the projection. But whatever the final number of pixels and the least amount of compression, there should be an end to the race, and with it, eventually, a maturing of the technology.
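
To make the ACES idea concrete: each manufacturer supplies an Input Device Transform (IDT) that maps its sensor’s data into one shared, scene-linear color space (ACES2065-1), after which every downstream tool only needs to understand that one space. As a minimal sketch, the 3×3 matrix below is the commonly published one for linear Rec.709 sources; an actual camera IDT would use the manufacturer’s own matrix, usually preceded by a linearization step:

```python
# Sketch of the ACES idea: map source RGB into one shared color space.
# The matrix is the commonly published linear Rec.709 -> ACES2065-1
# transform; a real camera IDT would use the manufacturer's own matrix.
import numpy as np

REC709_TO_ACES2065 = np.array([
    [0.4397010, 0.3829780, 0.1773350],
    [0.0897923, 0.8134230, 0.0967616],
    [0.0175440, 0.1115440, 0.8707040],
])

def to_aces(linear_rgb):
    """linear_rgb: (..., 3) array of scene-linear Rec.709 values."""
    return np.asarray(linear_rgb) @ REC709_TO_ACES2065.T

print(to_aces([1.0, 1.0, 1.0]))  # white stays ~[1, 1, 1]
print(to_aces([1.0, 0.0, 0.0]))  # Rec.709 red lands inside the larger ACES gamut
```

Because the ACES space is large enough to hold anything a current sensor can capture, grading and finishing tools can be written once against that one space instead of once per camera.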

1 Comment

  1. Neil Smith
    April 28, 2013

    Great article, Robert … had the pleasure of discussing many of the issues you’ve raised with Garrett after he returned from the HPA presentation in January … you’ve done a terrific job in summarizing the main points.

The only thing I’d add is that I don’t think “the race” will slow down … I was a senior manager in the computer industry for 25 years before I set up our post house in 2005 … the one thing I learned at companies like Digital Equipment Corp and Microsoft is never to bet against Moore’s Law (the doubling of the number of transistors you can cram onto a chip every 18 months, coupled with the halving of price) … that price/performance curve has driven the tremendous gains in computing power we’ve seen in IT over the last 40 years.

Hollywood was late in coming to the digital party … film was an analog medium and didn’t conform to Moore’s Law, the same with videotape … however, now that Hollywood has embraced the digital revolution and become a Moore’s Law-driven industry, I think we’ll see the rate of innovation in digital acquisition and file-based workflow continue to increase exponentially … my old boss, a certain Bill, used to observe that Moore’s Law is like “drinking from a fire hose” … it never stops spewing forth a continual stream of new technology products.

Yesterday, we ran our first 4K MADE EASY training day … based around EPIC and Sony 4K workflows, we demoed how you can now edit, color and finish a 4K project on a $299 piece of software from Apple (FCP X) running on a $3,000 Apple Retina MacBook Pro … of course, we also had some high-end Macs running on a super-fast PCIe-based SAN delivering real-time 4K performance for multiple users at a fraction of what it would have cost even a couple of years ago.

The highlight of the training session was a 50-inch 4K TV from a Chinese company called SEIKI … the TV (after it was professionally calibrated) makes for a great client monitor for DPs, directors and VFX supervisors wanting to see native 4K resolution … the cost of the SEIKI 4K panel is $1,500 now … by the time it hits Walmart in the fall it will be a lot less … the company also has a 65-inch 4K monitor coming out in the next couple of months, which will help drive adoption in the consumer market.

At NAB, Blackmagic Design announced their 4K production camera for $4K … Blackmagic understands Moore’s Law and leverages it in their product design and marketing strategy … thousands of cinematographers around the world will soon be shooting 4K RAW images on a $4K camera, cutting them on a $300 piece of software from Apple and grading them in DaVinci Resolve (which now costs $1,000, compared to the $1 million it cost when you and Garrett were doing telecine in the good old days), while looking at Ultra High Definition images on a sub-$1,000 4K monitor.

    The speed of change in digital acquisition and file-based workflow will continue unabated … how we master that perpetual ‘fire hose’ and ride the continual wave of digital innovation is something that will continue to challenge us for years to come.

The good thing about Moore’s Law is that it doesn’t apply to talent … that seems to be controlled by some obscure laws of nature, hard work and determination … but keeping that talent honed and relevant for an ever-changing digital ecosystem is certainly going to be a challenge for all of us.

    Neil Smith
    CEO
    LumaForge and Hollywood DI

