Niall Anderson looks at the loudness war in cinema

A famous technical innovator, Stanley Kubrick was defiantly behind the curve in one regard. He always mixed the sound of his films in mono. His reasoning was simple: you can’t predict the sound environment of a movie theatre. You can’t predict the size of the room, the output or placement of the speakers, the number of channels in a theatre’s mixer or the competence of the projectionist. A complicated stereo mix would leave too much to chance. So just make sure the dialogue is audible, mix the whole shebang in mono, and get the film out there. It will sound pretty much as good in a suburban fleapit as it did on the sound stage.
This may seem like a mere technical consideration, but it has artistic consequences, the major one being that with only one audio track to play with, you have to think carefully about your approach to volume. Working in mono, you can’t have a sudden violent explosion in a viewer’s right ear while the rest of the soundtrack potters on at its usual level. Big surges in volume have to be carefully planned or the entire sound mix will be destabilised. The result is that the loud parts of Stanley Kubrick’s films are rarely objectively loud (in terms of decibels); they’re just loud in relation to the other bits. Mono means that you have to pay equal attention to the quiet stuff.
Of course, it’s entirely possible to achieve this kind of careful audio balance in stereo. Indeed, given the advances in sound recording and mixing afforded by digital technology, it should be easier now than ever. Digital sound recordings have a much wider dynamic range than analogue recordings, which means, first, that a larger span between the quietest and loudest sounds can be captured and reproduced, and, relatedly, that there is now less need to rely on sheer volume to distinguish between – for example – an explosion and dialogue. In addition, the technology gap between what a sound designer hears on the sound stage and what the moviegoer can expect to hear in the theatre has drastically narrowed since Kubrick’s heyday. Even the worst suburban fleapit now has Dolby Digital surround sound, and even the most careless or high-handed sound designer will have this minimum standard in mind when mixing and mastering the sound for a film. For all these reasons, we should be living through a golden age of sound: crisper, cleaner and more dynamic than before. So why are people leaving IMAX screenings of The Dark Knight Rises, for instance, complaining both that the whole thing is too loud and that the dialogue is inaudible? Surely a purpose-built IMAX theatre is the perfect environment to see it? More generally: why hasn’t the golden age come to pass?
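For the technically minded, that widening gap is easy to quantify. The theoretical dynamic range of a digital recording follows directly from its bit depth; the analogue figures in the comments below are ballpark comparisons rather than gospel, and the sketch is illustrative only:

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM audio: 20*log10(2**bits), i.e. roughly 6 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit digital audio: {pcm_dynamic_range_db(16):.0f} dB")  # ~96 dB
print(f"24-bit digital audio: {pcm_dynamic_range_db(24):.0f} dB")  # ~144 dB
# Analogue optical and magnetic film soundtracks are commonly quoted at roughly
# 50-70 dB, so digital leaves far more room between a whisper and an explosion
# before anything has to be made 'louder' simply to stand out.
```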

In part, it’s because the technological advances happened by degrees (we didn’t just wake up one morning capable of radically better sound reproduction), but it’s also because of the very human urge to get a new piece of technology and do the most obvious things with it first.
In the early to mid-90s, most film soundtracks were still entirely analogue, but studios were beginning to use additional technology to retouch them. A noise reduction tool like Dolby SR – still very much an industry standard – could be used to provide a better balance between sounds in a stereo mix. Crucially, however, it could also make things louder. The audible results of this approach were initially quite modest, at least in purely technical terms: about 3 decibels of extra volume at mid-range frequencies, and about 10 decibels extra at low and high frequencies. But here’s the key point: mid-range frequencies are much punchier to the human ear. A 3 decibel increase in the mid-range will sound more dramatic than an equivalent increase at high and low frequencies. And because Dolby SR was new technology – because it was, to some extent, singing for its supper – sound designers and film directors wanted you to hear the difference. So they broadly ignored the greater dynamic range that Dolby SR gave them, and put all their efforts into the middle frequencies. In other words, they bet the house on red.
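Two bits of arithmetic make the point concrete. A 3 decibel rise is a doubling of signal power, and the ear’s bias towards the middle frequencies can be sketched with the standard IEC 61672 A-weighting curve (only an approximation of human hearing, but good enough to show the shape of the problem):

```python
import math

def a_weighting_db(f: float) -> float:
    """IEC 61672 A-weighting: roughly how much a tone at frequency f is
    emphasised or discounted by the ear, relative to 0 dB at 1 kHz."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.0

print(f"power ratio for a +3 dB boost: {10 ** (3 / 10):.2f}x")  # ~2x the power
for f in (50, 100, 1000, 3000, 10000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# Deep bass at 50 Hz is discounted by roughly 30 dB; the 1-3 kHz band, where
# dialogue and 'punch' live, is barely discounted at all. A modest boost there
# therefore sounds far more dramatic than the same boost at the extremes.
```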
This doesn’t mean that all films suddenly became deafening (the peak levels of 1983’s The Right Stuff are actually a good 5 decibels higher than those of 1998’s Saving Private Ryan). But it does mean that the base level – the volume of the very quietest stuff in a film – tended to get higher. This wasn’t a problem on the sound stage, where sound mixers and mastering engineers were working in a more or less ideal acoustical environment, but it caused chaos when the films actually hit the outside world.

The most obvious problem was distortion: a conspicuous fuzziness to the sound in cinema theatres whenever the soundtrack got busy. This wasn’t so much a matter of films being louder (by and large, as we’ve seen, they weren’t). It was more about the balance between quiet and loud getting scrambled. And here your local cineplex has to stand up and take its fair share of the blame.
Cinemas in the 90s – and multiplexes in particular – tended to use sound limiting technology to bring peak signals down. The reasons for this were twofold: cinemas have public order responsibilities with regard to the total volume of sound they emit; and, more practically, they wanted to ensure that what was going on in Screen 1 wasn’t audible to customers in Screen 2. So they could either turn the overall volume down (not a good solution for your average summer blockbuster) or they could use compressors to limit the peaks of a soundtrack to a uniform level. Part of the genius of compressors is that they can simulate loudness without adding volume. At their most basic, compressors kick peak sounds into the mid-range where – as discussed above – the sounds will retain their punch. But what happens if you apply this extra mid-range kick to a film soundtrack that’s already optimised for middle frequencies? Well, you get overload: you get fuzzy dialogue, overinsistent background noise and random signal drop-outs at the high end as the compressor struggles to balance all of this in real time.
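For the curious, here is a minimal sketch of what a basic peak compressor does: a simplified hard-knee design with no attack or release smoothing, purely illustrative rather than a model of any particular cinema hardware. Peaks above a threshold are squashed, then ‘make-up’ gain lifts everything else, so the mix feels louder even though its peaks are no higher than before:

```python
import numpy as np

def compress(signal: np.ndarray, threshold_db: float = -12.0,
             ratio: float = 4.0, makeup_db: float = 6.0) -> np.ndarray:
    """Per-sample hard-knee compression on a normalised (-1..1) signal:
    anything above the threshold is reduced by `ratio`, then the whole
    signal is lifted by make-up gain."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(signal) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return signal * 10 ** (gain_db / 20)

# A quiet 440 Hz tone with one loud transient: after compression the peak is
# no higher than before, but the average (perceived) level has clearly risen.
t = np.linspace(0, 1, 48000, endpoint=False)
sig = 0.1 * np.sin(2 * np.pi * 440 * t)
sig[20000:22000] *= 8.0  # a brief, loud burst
out = compress(sig)
print(f"peak before: {np.abs(sig).max():.2f}  after: {np.abs(out).max():.2f}")
print(f"rms  before: {np.sqrt((sig**2).mean()):.3f}  after: {np.sqrt((out**2).mean()):.3f}")
```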
We have moved on from this situation, but not by much. Cinemas have moved towards speakers with greater headroom, which has lessened the use of aggressive compression. This has seen off most of the distortion that blighted 90s blockbusters. At the production end, various technical standards and associated technology have been released that aim to control the most commonly ‘aggravating’ middle frequencies. This has had more effect on the volume of trailers than it has on films, but it does mean that the reference volume for all parts of a feature presentation will be more or less balanced. And this last fact suggests that there’s a single particularly important regard in which we haven’t moved on from the 90s: the long-promised loudness war never really escalated.

The biggest reason for this is simple physics. Sitting in my Hollywood sound stage, using the most sophisticated digital technology known to man, I can now theoretically boost the volume of any sound – at any frequency – to 115 decibels (roughly the volume of a chainsaw at close range). I can do this without producing any distortion in the original signal. And I can send this signal to a set of speakers that I know will cope with that volume, again without producing distortion. But then the sound leaves the speakers and hits a brick wall: literally. There’s hardly a room on earth that could absorb that volume of sound before the room’s own size, shape and contents begin to distort the signal. There almost certainly isn’t a cinema theatre on earth that could. Above 85 decibels in an average theatre room, most sounds will distort beyond intelligibility.
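To put those numbers in physical terms (taking the standard 20 micropascal reference for dB SPL; the rest is straightforward arithmetic), a 30 decibel difference is enormous:

```python
import math

P0 = 20e-6  # standard reference pressure for dB SPL, in pascals

def spl_to_pressure_pa(db_spl: float) -> float:
    """Convert a sound pressure level in dB SPL to pressure in pascals."""
    return P0 * 10 ** (db_spl / 20)

for db in (85, 115):
    print(f"{db} dB SPL ~ {spl_to_pressure_pa(db):.2f} Pa")
# 115 dB is roughly 11 Pa against about 0.36 Pa at 85 dB: a 30 dB gap is a
# ~32x difference in pressure and ~1000x in power - energy the room itself
# has to soak up before the signal stops bouncing back as distortion.
```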
This leaves sound designers with an effective volume range of 30-80 decibels: more than enough to make a great-sounding film, but never enough to actually threaten your hearing. And very few films ever actually get near that upper limit. The impression of loudness is most often just that: an impression. But it’s an impression that somebody on a sound stage has striven very hard to create, so industry wonks (and advertisers in particular) who pooh-pooh viewer complaints about loudness are being disingenuous. They wanted you to at least think it was loud, so they can’t in good conscience act offended that you did.
The problem, then, remains much as it was in the 90s – not volume as such, rather an over-concentration on mid-range sounds: a deliberate drive to get maximum punch from the most obviously audible frequency range. And there is one area at least in which things have got worse: the violent use of post-production dubbing. It’s not uncommon on high-end Hollywood films now to have four or five mix engineers, each working on a different aspect of the film’s sound (music, dialogue, natural sound, special sound effects, etc.). They will be working to agreed specifications, but all of these independent mixes still have to be brought together at some point and mastered to an agreed level. And the tendency here will always be to orient by the loudest sound rather than the quietest, because you can boost a signal much more accurately and to much greater effect than you can reduce one. The result is often a kind of sonic supersaturation – an invariant mid-range blare from which no individual sound can stand out. For all the swish technology being used, the result is not a million miles removed from the bark of bad mono productions in the bad theatres of yesteryear.
In the case of something like The Dark Knight Rises, the sonic mess has been produced by even more tortuous means. Some 40 minutes of TDKR were shot in IMAX. IMAX cameras run at about 90 decibels and sound like a drill: it’s pointless to even try to capture sound while shooting with one. So every single sound effect in those scenes had to be digitally created from scratch or dubbed at a later time. The rest of TDKR was shot in 35mm with more or less natural in-camera sound. If there’s a reason that Tom Hardy is unintelligible for most of the film, I suspect it’s that his dialogue in the 35mm sections had to be boosted to match that of the dubbed IMAX sections – not, as Christopher Nolan seems to insist, a deliberate artistic effect.
What cinema’s current sound problems all ultimately come down to is that hoary old catch-all: a collective failure of the imagination. Hang around sound engineering forums on the internet for thirty seconds and some mix engineer will manifest themselves to tell you darkly that somebody in a suit is making them master film sound to conspicuously hot levels. It’s a top-down conspiracy that the techies are powerless to resist. The suits, meanwhile, will tell you that volume is what the kids want: the Hostel films had to be at least as loud as the Saw films or the audience would have felt short-changed. Whatever the truth of these arguments, one thing unites them: they’re reactive and basically conservative. Somebody broke the sound barrier, so everybody else felt like they had to do it too. I suspect that there is no coming back, if only because a genuinely great-sounding film will tend not to call attention to itself through volume (see The Tree of Life), and even where it does (see Enter The Void), it will simply get called loud. It may be up to us, the audience, to let filmmakers know exactly what kind of loud we like.