EDU010 SOUND EDITING FOR SCALE

The next subject I’d like to discuss with regard to sound editing is the idea of changing scale. This can be split into two categories: making a sonic event seem larger, or making it seem smaller. Someone with no experience of actual sound editing might easily assume both of these needs are solved by plugins, but that is often not the case. Keep in mind we are discussing sound editing, rather than mixing. It is also about more than simply playing a sound LOUDER or quieter.

What not to do:
I saw a Reddit thread ages ago where someone was complaining that their sounds seemed noisy, as though they had a high noise floor despite being clean recordings. After a few questions it became apparent that by default they were using a limiter to crush the peaks in an attempt to make the sound seem bigger. If this person was slamming a sound with, say, 20dB of limiting, then what they were effectively doing was turning up the gain of every part of the sound by 20dB, while squashing the peaks that would otherwise go over 0dB. So what they had actually done was turn up the noise floor by 20dB, and that is a lot! All microphones have some self noise, and even the quietest, most expensive recording studio has some residual noise. So a noise floor that was not noticeable beforehand is now apparent, due to amplifying it by 20dB! And by crushing the dynamics they had also made it more difficult to layer sounds.
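
If you want to convince yourself of the arithmetic, here is a minimal Python sketch (using numpy; the signal and the levels are made up purely for illustration) showing that 20dB of gain into a hard ceiling squashes the peaks but lifts the quiet noise floor by the full 20dB:

import numpy as np

sr = 48000
rng = np.random.default_rng(0)

# A crude stand-in for a "clean" recording: one loud transient over a -60dBFS noise floor
noise = 10 ** (-60 / 20) * rng.standard_normal(sr)
t = np.arange(2000) / sr
transient = np.zeros(sr)
transient[1000:3000] = 0.9 * np.sin(2 * np.pi * 150 * t) * np.exp(-t * 400)
x = noise + transient

def rms_db(sig):
    return 20 * np.log10(np.sqrt(np.mean(sig ** 2)) + 1e-12)

# "Slamming" the sound: +20dB of gain into a hard ceiling (a crude brick-wall limiter)
limited = np.clip(x * 10 ** (20 / 20), -1.0, 1.0)

tail = slice(10000, None)  # a stretch after the transient, i.e. just noise floor
print(f"noise floor before: {rms_db(x[tail]):.1f} dBFS")
print(f"noise floor after:  {rms_db(limited[tail]):.1f} dBFS")  # roughly 20dB higher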

In my early days of using ProTools, I would sometimes wonder why some of my work sounded great in my studio, but when I took it to the dub stage for predubs it would seem less ‘present’. What I came to realise is that in my studio I was layering elements without paying attention to maximum levels. In my session I had sets of tracks going to buses, which were then summed to a mix bus and out to my monitors. Layering lots of loud sounds (at, say, the climax of a busy scene) meant that while all my individual tracks with my sound editing on them were not clipping, the bus was. (Keep in mind this was long before it was possible to run plugins on every track or bus.) This meant the bus was acting as a brick-wall limiter.
Now a dub stage is usually set up with a very expensive, high-quality desk with a LOT of headroom. So when I arrived at the predub, I would delete all my busing and switch to direct outs so that the rerecording mixer could mix my individual tracks. And now I would hear my sounds as a composite, but with no clipping or limiting at all. I had been fooling myself that everything was louder thanks to the limiting, but in the final context of a film mix that does not apply at all, as my sounds had to work with dialogue (DX) and music (MX). Preserving the dynamic range of your sounds, so they can be expressed clearly in a mix environment with lots of headroom (and, where appropriate, lots of volume), is very important.

The reason I share this experience is that it is very important to consider the final destination of your work. Sound editing is not mixing. Yes, you are monitoring your sound edits, but a rerecording mixer has a very different setup to a sound editor. Your sound editing studio should be set to a calibrated loudness which is correct for your small studio/room. A dub stage is cinema sized and has an entirely different calibration.

Even the difference between mixing for TV/streaming versus cinema is important. In Behind the Seen, the book about Walter Murch, he mentions how he has ‘little people’ stuck to his monitor to remind himself how big a cinema screen is.

This same idea applies to audio. For example, imagine you pan a sound to follow onscreen action from the right side of the screen to the left. In a small edit studio that sound travels across your screen, which might only be 2 feet across, whereas the pan across a cinema screen might be 40 feet. That added resolution is only apparent when you are in that environment.

OK
So we want to scale a sound moment. We want to make it seem bigger or make it seem smaller.
Let’s start with the seemingly easier scenario…

SOUND EDITING TO MAKE A SOUND EVENT SMALLER
Why do I keep calling it a sound event, and not just a sound?

One of the basic truths of sound editing is that it is very rare for a sound event to be a single sound. Again, this is something I realised very early on. When I first started working as a sound effects editor, I would edit and prepare my tracks and then hand them off to my boss, who was the mixer. As an example, if there is a moment where someone gets hit over the head with a bottle, I want to achieve two things with my sound editing.
First I want to create something that ‘feels’ real. It’s not a documentary about a bottle hitting someone’s head, it’s a drama, and it needs to feel real and convince the audience it is real so that they don’t notice it’s a prop bottle made of plastic. I want to create sound such that the audience flinch and almost feel the pain. But I also have a second motive: I want to impress my boss, and I want him to have fun mixing my tracks.

So again, let’s look at what not to do. I could do a search of my sound FX library for ‘bottle head smash’ and maybe some sound appears in the list that at first seems perfect. If I grab that sound, sync it up & hand it off to my boss, what can he do with it? He can turn it up or down, he can EQ it or control its dynamics, he can pan it and/or add reverb. But that’s all pretty basic & not very creative… We want to enable more creative work than using pre-designed sound effects!

Here is what I did. I thought about what is actually happening with that bottle smash. First of all, someone is being very physical to move their arm fast enough to raise & swing that bottle, and bring it down on someone’s head. So I am thinking about what that person is wearing, and how fast they are moving. Let’s say they are wearing a leather jacket and they move fast. I’m going to use a couple of tracks to layer some sudden leather jacket moves, and I’m likely going to use a swish of some kind. So that’s our first two tracks or layers.
Next, what happens at the point of impact? For a brief moment, a very solid object (the bottle) hits a fairly solid object (the head). So before the bottle actually shatters, there is a nasty, dead thump of an impact. Depending on where on their head it impacts, there will be some cartilage, jaw or skull movement and/or breakage. So tracks 3, 4 and 5 have the first impact & the physical head reaction.
A brief moment later the glass breaks. I might have to hunt through dozens of glass breaks to find one that ‘feels’ right. I might end up needing a bottle smash and a wine glass break or something else to really make that bottle onscreen shatter. So that’s tracks 6 and 7.
Now what happens? The glass shatters & glass debris flies. So I want some elements of glass fragments hitting things. And again I’ll likely need to layer 2 or 3 source tracks to build up enough glass fragments to feel right, especially to make elements that scatter left & right. Tracks 8, 9 and 10. Is the person now bleeding? Do we need some blood splatters as well?

So as a sound editor, I source all of these elements and sounds and I layer them & carefully sync them to picture. And I keep working on them. I cut that swish & leather jacket movement off at the point of impact. And I use ProTools volume automation to shape the sounds and get a balance between the elements, such that if I play the tracks down it ‘feels’ like a single event. But it has shape and it has character. And guess what? My boss is going to have a ball mixing that moment!

Now that might all seem like the layering is to make the sound moment bigger. And in that specific case, perhaps it is. But layering is still used for quiet smaller moments, for the same reason. We want to add character and we want enough elements to be able to shape them into something interesting.

But what techniques are available to us to make a sound appear smaller?
Somewhere I read a general rule about pitch shifting: we more easily accept a sound as real if it has been pitched down or slowed down, but a sound pitched up often does not seem real, as though our brain does not let it go past without thinking, what’s up with that sound? So while slowing sounds and/or pitching them down to make them larger is a valuable technique, the opposite is often not worth pursuing. So what are our options?

As always the right source material goes a long way. On a feature film, many of the smaller, quieter sounds are provided by the foley team. Along with footsteps they also perform ‘spot effects’ & clothing rustles & movement. And as a supervising sound editor, I would always talk with the foley team and especially discuss the moments where sound editing and foley overlap.

But when I want to edit sounds for a smaller moment, it does help to have such sounds in your library. It’s why a good sound effects library does not only cover the big hero sounds but also provides smaller, quieter variations. Always remember: if you are recording a prop or a performed sound, capture gentle, quieter variations too!

So the first port of call is to find sounds of appropriate scale, and then we can still layer them as required.

Another useful technique is to edit the loudest part out of a sound. Let’s say someone drops a metal box on a wood floor. I do a search and I don’t have a recording of that action, but I do have a recording of someone dropping a 5 ton metal block, which seems too heavy but is the right action. So I sync it up roughly and zoom in until I can identify each part of the impact, then select the loud parts & delete them! So track 1 is the original file; on track 2 I’ve selected some parts: first the loud part that I want to delete, and second a piece of the end action which feels smaller in scale. On track 3 I’ve deleted the loud part and used fades to smooth the cuts. And that later piece I’ve edited & added fades so it becomes a self-contained smaller version, which might be a solution too.

Also note: every edit has fades. EVERY EDIT. Fade in and fade out. In your small studio you may not notice a glitch on a cut, but on a dub stage if a rerecording mixer solos a track and it has a glitch on a cut then you should be ashamed! With the fade in we can shape the attack of a sound, and with the fade out we can shape the decay. But any cut has the potential to glitch if it does not have a fade. Accordingly, it is essential to know all the keyboard shortcuts for fades.
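
The reason a bare cut glitches is that the waveform can start or stop abruptly on a non-zero sample at the edit point. If you ever need to batch-process material outside of your DAW, here is a minimal Python sketch of the same idea (using numpy; the fade length and curve are arbitrary choices of mine, not a standard):

import numpy as np

def apply_edit_fades(region, sr, fade_ms=5.0):
    # region: a mono numpy array of samples (one edited clip)
    # fade the first and last few milliseconds so the cut never starts or
    # ends abruptly on a non-zero sample (which is what clicks)
    n = min(int(sr * fade_ms / 1000.0), len(region) // 2)
    ramp = np.sin(np.linspace(0.0, 1.0, n) * np.pi / 2)  # gentle curved ramp
    out = region.copy()
    out[:n] *= ramp           # fade in shapes the attack
    out[-n:] *= ramp[::-1]    # fade out shapes the decay
    return out

# e.g. faded = apply_edit_fades(clip[start:end], sr)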


A handy ProTools technique is using nudge within a region. Let’s say you import a long sound file and edit it down to one section. With that region selected, if you use CONTROL + or – (numeric keypad) the audio within the region moves by the nudge amount. This means that if you have cut a sound in sync and added fades, you can still move through the source audio file and listen to alternative sections of it:

ProTools Editing Shortcuts

To learn such techniques you will need to read & memorise the keyboard shortcuts, and when I say memorise, I mean you need to use them enough that they become muscle memory. The Pro Tools Shortcuts PDF is an essential document! These are the relevant pages for editing shortcuts:

 

Using only those keyboard shortcuts you can sync and edit a sound without touching your mouse. You can nudge sync, and you can trim the front or end of the region. You can move audio within your region. You can add fades to the start & end. You can scrub and varispeed-play the audio. And even while varispeed playing you can select parts. Using a mouse is generally the slowest way to do anything. You need to learn keyboard shortcuts, and in a future tutorial I’ll explain potential uses for macro apps like Keyboard Maestro.

 

Another method for scaling a sound down is a slightly specialised one, and that is to use convolution with an Impulse Response. As an example, on the film BOOGEYMAN (2008) there is a scene with a plasma ball toy in a kid’s bedroom. There’s a shot where it’s close up onscreen, so I knew I needed enough elements to make it interesting when in foreground focus like that. But it then also needed to sit in the room as a single spot element.
I loaded up lots of arc welder sounds and synced the arcs to the onscreen plasma arcs, and it sounded great & powerful and all, but it did not feel like a plasma ball, which contains the arcing. So I processed my arcs through a glass IR, which made them feel smaller, as though we were hearing them through glass. And suddenly it felt real! I split my tracks so that when the plasma ball was close up onscreen we could pan the arcs across the screen, but when it was in the back of shot all of the arcing was contained and could be panned to its onscreen location.
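
For anyone who wants to experiment with this outside of a convolution plugin, here is a minimal Python sketch of running an element through an impulse response (using scipy and soundfile; the filenames are hypothetical, and you would need to capture or source the IR yourself):

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("arc_welder.wav")        # the element to be scaled down
ir, sr_ir = sf.read("glass_jar_ir.wav")    # an IR of a small glass object/space
assert sr == sr_ir, "resample the IR to match the source first"

# work in mono for the sketch
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)[: len(dry)]     # convolve, then trim back to the source length
wet *= np.max(np.abs(dry)) / (np.max(np.abs(wet)) + 1e-9)   # rough level match to the dry
sf.write("arc_welder_through_glass.wav", wet, sr)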

 

SOUND EDITING TO MAKE A SOUND EVENT SEEM BIGGER!

As with previous techniques, having the right source material makes a big difference. So if a monster stomps his foot & the earth shakes, then we can search our library for big heavy sounds. Again, we are not looking for a single solution; we want elements that we can layer. So for that foot stomp we know we want a big heavy thud, but this monster is so big the earth shakes, and the earth is made of dirt & rocks & gravel. So we search our library for some dirt & heavy rock movement. It still might not feel big enough, so we try adding an explosion underneath it, but balanced so you don’t hear it as an explosion. And maybe we grab some bassy thunder and layer in a short piece of that.

Now when we layer, we need to be aware of levels & dynamics, but we also need to think about frequencies. If we layer a lot of bassy sounds all in the same frequency range then it’s going to get muddy and lose definition. So as we search for source material we want to look for elements with different harmonic content. That rock movement will be crunchy and cut through, so it’s a good element to layer with bassy sounds. But one of the creative joys of sound editing is to use elements that have nothing to do with the onscreen action. So maybe I remember a recording I made of a tree being cut down, or a wardrobe falling over, or something. Sneaking interesting, characterful sounds into our layers can make all the difference.

Sync is obviously an important aspect to consider with larger composite action. If there is a clear single impact, then once we have layered some impact elements we need to zoom in and check that the actual sync of the impact point is tightly aligned. Masking can be an issue, so instead of turning up an element to hear it more, we may in fact need to turn some other element down at that point. Using markers as per the previous tutorial, we can see where there may be multiple impacts or sync points. And when it comes to, for example, the decay of an explosion or complex action, it might be more important to randomise or not synchronise elements to make it feel more chaotic. Context matters.

As we build up layers, we will balance their levels with volume automation, and I will never forget the day I really appreciated what is possible with nothing more than great sounds & volume automation.
Anecdote time: One of the first big breaks I got as a young sound editor was working on Peter Jackson’s first US studio film THE FRIGHTENERS (1996). My dear friend & mentor Mike Hopkins (RIP) was the supervising sound editor on the project, but he was held up on another project and needed someone to start work to cover him for six weeks. He first asked my boss, who wasn’t keen as he had a young family & lived in Auckland. So my boss asked if I would be keen. OMG, hell yes!!! So I moved down to Wellington for two months & got to work on previz and early cuts of scenes from the film.
But the really amazing part was that the US studio that funded the film insisted Peter Jackson use a US sound designer, since while NZ had a great indie film industry, no one had ever made a “Hollywood” film here before then. As luck would have it, my six week stint overlapped by a week with the US sound designer, who turned out to be none other than Randy Thom and his brilliant assistant Phil Benson!! As a young sound editor I was very shy, so when Randy arrived I stayed in my room working… But pretty soon I started hearing the most incredible sounds coming through the wall, as my little studio room was next door to his. Literally the wall would start shaking…

After a while there’s a knock on my door & Randy comes to say hi, and we have a chat & then I go with him to see what scene he’s working on. Of course it sounded even more amazing in his room, but I was at a loss as to how he was making these sounds work. I expected to see a big ProTools session with a million edits in it. Back then we were using ProTools 3, which maxed out at 16 tracks. On his tracks were long chunks of sounds with no apparent edits, so I was stumped as to what I was hearing… and then a light bulb went off in my brain! He switched to volume automation mode and there it was! He was using intricate hand-drawn volume graphs as envelopes for the sounds.

As an example, in the film there is a character called the Wallpaper Man, who travels within walls, and in one scene comes down a wall in a bathroom and plunges his hand into a man’s chest, causing a heart attack. Randy’s sounds for this scene were phenomenal, but when he switched to volume graphs, there was all his brilliant source material sculpted to picture using volume graphs! It was truly an AHA moment… Of course his choice of sounds, his aesthetic & his depth of knowledge of storytelling are all vital parts of his profound skills. But seeing a track of, for example, stampeding horses, which was only actually automated up & back down in volume, shaped to a moment of action onscreen, was genius!

So this is another technique for scale, i.e. using volume graphs to shape elements to fit the dynamic of the action onscreen. I don’t mean this so much as mixing, because we don’t yet know how it will be used in the final mix, but more as a way of building up layers and shaping them.

Now two other anecdotes, which led to techniques for making sounds seem bigger.
Also early on, I worked on a short film, THINKING ABOUT SLEEP, in which one of the characters jumps off a bridge. We see him jump & we see him land & fall over, all in slow motion. I started loading & syncing sounds but nothing felt right against the slow-motion pictures, so I started slowing down my sounds. Now, a skill that you slowly acquire is to see the potential in a sound, and when it comes to slowing sounds down it is often useful to have a resonant source sound, but one where the resonance is not already bassy. So I started searching for elements that had the potential to be bassy once they were slowed down by an octave or two.

Now it’s important to appreciate the difference between pitch shift & slowing sounds down. It’s possible with plugins to pitch shift a sound while keeping it the same duration, but to my ears that often sounds worse than slowing it down, as half of the information is being discarded or at least perceptually compressed. Slowing a sound down means that both the pitch is lowered and the duration becomes longer. So at half speed, a sound is one octave lower in pitch AND twice as long.
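
As a rough equivalent outside the DAW, here is a minimal Python sketch of a varispeed-style half-speed slowdown (using scipy and soundfile; the filename is hypothetical). Stretching the waveform to twice as many samples and playing them back at the original rate drops the pitch by an octave and doubles the duration, which is exactly the effect described above:

import soundfile as sf
from scipy.signal import resample_poly

x, sr = sf.read("body_fall.wav")   # hypothetical source recording
if x.ndim > 1:
    x = x.mean(axis=1)             # work in mono for the sketch

# Interpolate to twice as many samples, then write at the original sample rate:
# it plays an octave lower AND twice as long, unlike a time-preserving
# pitch-shift plugin.
half_speed = resample_poly(x, up=2, down=1)
sf.write("body_fall_half_speed.wav", half_speed, sr)

print(len(x) / sr, "seconds ->", len(half_speed) / sr, "seconds")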

For the scene in the short film, I took body falls on dirt and grass and I slowed them down to half speed. And I found some movement sounds which I also slowed down… Then I tried slowing down some pieces of wind and adding them as layers, and with some shaping of my volume graphs I slowly got the scene to feel ‘right’ even though it was slow motion. So that was my first practical experience of slowing sounds down, and I noticed how it made sounds feel larger and heavier, like gravity was slowed down and momentum increased.

Slowing sounds down is not a new technique. Since the very beginning of film sound, sound editors have been using similar techniques, for example by recording on a Nagra at 15ips and then replaying it at 7.5ips (half speed) or 3.75ips (quarter speed). I have some recordings in my library that I was given, of dynamite explosions recorded to Nagra and then transferred at real speed and half speed. And wow, those sounds really kick!!

 

Another technique to make sounds bigger is the use of subharmonic synth processing. I discovered this technique in a funny way, as though it were an industry secret. After working on The Frighteners, I got my first feature as the HOD sound designer. I was hired as a freelancer to design sound for the film SAVING GRACE (1998), and the studio I worked in was owned by a guy who was more of a music engineer & composer. So his studio had lots of nice analogue gear and my room had a PT3 rig. One day he came wandering in to show me a bit of gear he had just bought. It was an odd looking unit labelled DBX 110, and it had something about a synth in the title. I was intrigued, as I am of course also a synth nerd. But when he plugged it in & showed me what it did, I instantly realised I was hearing a sound or tonality that I had heard many times before. It was a subharmonic synth, and it reminded me of the subby doof doof from every nightclub I had ever been in! We played around with it & I put some sound effects through it & thought wow!!
When we came to start predubs for the film I mentioned it to the rerecording engineer, Mike Hedges, and he laughed, spun his chair around & pointed at their rack, and there was a newer model of the same bit of outboard gear! Right, I thought to myself, as soon as this job is over I am buying one! So I ordered a DBX 120XP Subharmonic Synth and OMG I loved it and have used it ever since.


Bear with me while I first explain what it is doing. As an analogue process, it takes whatever sound it is fed, pitches it down an octave, and generates a new synthesised bass sound that is directly related to the input via an envelope follower. So it doesn’t slow the sound down; it stays in sync, and you get a bass sound an octave lower. But there is also a knob to control subharmonics. If you dial it up, you start to hear more harmonic layers of sub, and they all relate to the fundamental an octave below. As a double bass player, I noticed that if you used too much subharmonics it started to sound like someone playing chords on the double bass, or layering a fifth above a bass note. Sometimes this made it worse – too muddy, or too complex. But on a dub stage with big subwoofers, it could make your body shake!!
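
As a very loose illustration of the principle, here is a minimal Python sketch under my own assumptions (this is not the DBX algorithm): isolate a low band, halve its frequency with a zero-crossing flip-flop divider, then shape the new sub with an envelope follower of the input so it stays glued to the source. The filename and band choices are hypothetical.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def subharmonic_sketch(x, sr, band=(60.0, 240.0), mix=0.5):
    # 1. isolate the low band that will drive the new sub
    drive = sosfilt(butter(4, band, btype="bandpass", fs=sr, output="sos"), x)

    # 2. octave divider: toggle a square wave at each upward zero crossing -> half the frequency
    sign = np.sign(drive)
    sign[sign == 0] = 1.0
    ups = np.where((sign[1:] > 0) & (sign[:-1] <= 0))[0] + 1
    square = np.ones_like(drive)
    state, last = 1.0, 0
    for idx in ups:
        square[last:idx] = state
        state = -state
        last = idx
    square[last:] = state

    # 3. smooth the square towards a sine, an octave below the drive band
    sub = sosfilt(butter(4, band[1] / 2.0, btype="lowpass", fs=sr, output="sos"), square)

    # 4. envelope follower on the drive band keeps the sub in sync with the source dynamics
    env = sosfilt(butter(2, 20.0, btype="lowpass", fs=sr, output="sos"), np.abs(drive))
    sub *= env / (np.max(env) + 1e-9)

    return x + mix * sub / (np.max(np.abs(sub)) + 1e-9)

x, sr = sf.read("monster_roar.wav")       # hypothetical recording
if x.ndim > 1:
    x = x.mean(axis=1)                    # mono for the sketch
y = subharmonic_sketch(x, sr)
sf.write("monster_roar_sub.wav", y / (np.max(np.abs(y)) + 1e-9), sr)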

So I used my subharmonic synth on projects and it became an invaluable technique for making sounds seem much, much larger. But I had a very funny experience when again my friend Mike Hopkins was preparing to start Lord of the Rings. A US sound designer, Dave Farmer, was coming over to work on the Rings trilogy and had sent a request list of gear. Hoppy had Dave’s ProTools rig and plugins etc. all sorted, but there was one item he had no idea about. Can you guess what the request was? It was of course a DBX Subharmonic Synth! I laughed & pointed at my rack…

Now, a subharmonic synth is not only for making huge sound effects. When I was working on the film BLACK SHEEP, part of my work was to design sounds for an 8 foot tall monster were-sheep. The guys working on King Kong had developed a technique for creature vocals called the Kongilizer, which basically used a Sennheiser MKH80X0 mic recording at 192kHz with a real-time pitch shift plugin, which was fed to the performer’s headphones. So the performer hears themself pitched down an octave, and it affects how they perform – when they roar, they hear a big deep roar in their headphones, and as they alter their performance they hear the pitch-shifted result. So we took the same approach: we had a voice actor perform roars and breathing, and then I took those recordings to my studio and processed them through my DBX subharmonic synth. Here are some examples:

First is the pitch-shifted monster breathing, and then enhanced with a DBX120XP subharmonic synth:

When processing elements like this, it is very important to print the sounds and then use them as layers, rather than trying to balance them into a final mixed version. The reason this is important is that we do not know (a) how it will play on the dub stage and (b) how it will play in context. So my session would have the original recording on one track, the pitched-down version on another track, and the subharmonic synth version on another track. I would then mute the original and balance the two main elements using volume graphs. This way the rerecording mixer can put their faders ‘flat’ and hear my rough balance, but they can also access the elements, rebalance them, and treat them.

The first plugin released that emulated the DBX Subharmonic Synth was LowEnder, and it is one of the few plugins I would say is an essential purchase. It’s also important to differentiate it from plugins which try to make heavy bass work on small speakers. That is not what the DBX or LowEnder does. They both generate new sub-bass as an element in sync with whatever is fed to them (although do check sync, as there is a small processing lag which may or may not matter).
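
If you want to measure that lag on a printed element rather than judging it by eye and ear, one option is a quick cross-correlation check. This is a minimal Python sketch of my own, not any plugin’s official method; the filenames are hypothetical, and correlating a sub-only print against a full-band dry element only gives a rough estimate:

import numpy as np
import soundfile as sf
from scipy.signal import correlate

dry, sr = sf.read("stomp_dry.wav")        # the element that fed the sub synth
sub, _ = sf.read("stomp_sub_print.wav")   # the printed sub-synth version
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if sub.ndim > 1:
    sub = sub.mean(axis=1)

n = min(len(dry), len(sub))
xc = correlate(sub[:n], dry[:n], mode="full")
lag = int(np.argmax(np.abs(xc))) - (n - 1)   # positive = the print arrives late
print(f"print lags the dry element by {lag} samples ({1000 * lag / sr:.2f} ms)")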

Another anecdote about context, again from BLACK SHEEP. During the final mix there is a moment in the film where the monster were-sheep is attacking someone in their kitchen, and while trying to defend themselves, one of the humans throws a haggis at the were-sheep. Now, a haggis is a Scottish dish and it looked a bit like a large ball. So when it is thrown, it hits the were-sheep and then falls to the floor of the kitchen. The director wanted it to sound heavy, so I had layered elements, and playing in my predub it sounded great. But in the final mix it seemed like it wasn’t even there. We solo’d it & it was playing, but it was not rating… The problem? The entire NZ Symphony Orchestra performed the score for the film, and at that moment the score was driving the drama with the full orchestra playing aggressively. My bassy thumps of the haggis did not stand a chance. But the director wanted to hear it, so I started cutting some fixes. (During a mix we always have some empty fix tracks being fed to the mix, for exactly this reason.) The end result? To get the haggis to rate in that scene I ended up layering two or three explosions!! It seemed bonkers when solo’d, but it was the only way I could get it to cut through that orchestra.

So this is a reminder that you can have the perfect material cut for a moment, the director is happy, it sounds great in the predubs, and then the score comes along and the final context changes everything! From that experience I learned that it pays to have some fixes already cut when you foresee a potential issue. Cutting some ‘sweeteners’ and having them muted ‘unless we need them’ means less stress on the day.

Are there other techniques you use?

As you gain experience and build up your sound library, you also tend to build up a memory of techniques & the sounds used. So it’s handy to build up a collection of good bassy boom sweeteners. And when recording, it’s a reminder of why capturing multiple takes is so valuable. That big metal smash sound is useful, but if you have 10 variations of it from the recording, then if you decide to use that metal smash as, say, an element in a monster foot stomp, you can use a different take for each step rather than working to vary the same repeating sound element.

Auditioning sounds slowed down is another invaluable function of a sound library app. With SoundMiner I can slow a sound down as I audition it, and if I find the right detuned setting I can have SoundMiner process it that way as it transfers to my ProTools edit session. This function is not unique to SoundMiner, so I expect whatever sound library app you use is capable of it. In the screenshot I am auditioning some wood impacts slowed down by 75%.

It can be a fun learning experience to set that pitch slider really low, e.g. 95%, and then audition random sounds. The important factor is that it is shifting the dominant frequencies & resonance. So, for example, a high frequency sound that is shrill & nasty to your ears might seem harmonically beautiful two octaves lower.

One last anecdote about slowing sounds down. I have worked with kiwi film director Gaylene Preston many times, and the last time was on a documentary called HOME BY CHRISTMAS, about her father’s generation, who went to WWII and then returned to NZ somewhat traumatised by their experiences. There were archival WWII shots cut into sequences, and I began looking for sounds that were haunting. One day I was playing with some hi-res fireworks recordings, and when I slowed them down I had one of those AHA moments! Have a listen:

 

Notice how those shrill whizzes become almost vocal in tone once they are slowed down, and how the explosions & the echoes/reflections also gain so much weight & power.
When searching for sounds for a moment or sequence, it can be helpful to think laterally and search for related physical ideas. Another example: when working on THE WORLD’S FASTEST INDIAN we had recorded fast passbys with the Indian V-twin motorbike, but when it sets a land speed record on the Bonneville Salt Flats in Utah, there is a final passby that is so fast it really needed to feel dangerous. After playing with lots of elements I suddenly had a realisation: from the audience’s perspective, that bike passby is like being close to a gun shot. So I searched for gun shots which had a strong natural echo or slap delay, i.e. they were recorded in a canyon or near cliffs. As soon as I added that sound to the passby, it felt really dangerous. As an audience we do not perceive that someone has fired a gun, but we do perceive that the composite sound is moving very, very fast, such that it creates the slap echo retort of a bullet.

2 thoughts on “EDU010 SOUND EDITING FOR SCALE”

  1. Andrew Richards says:

    Another incredible read.

    To your point about pitch shifting vs varispeed, I’ve found Melda’s free MFreqShift to be an outstanding way to lower the frequency content of a sound whilst retaining the character. I’ve tested other frequency shifters and found MFreqShift quite unique in the way it retains the quality of the source. Most room resonances/reverbs seem to be most prominent in the 500Hz range or thereabouts. Shifting down aggressively with MFreqShift pulls those resonances down below the range of human hearing. The result is something like the original sound devoid of any space. I’ve found it particularly useful in animation and the like – an example can be viewed here: https://www.instagram.com/p/CTJ-N4oBxg7/

    Am I right in saying you run Lowender primarily as an audio suite or in your processing chain? Do you ever run it as an LFE send?

    • Tim Prebble says:

      I have LowEnder on a bus routed to a track for printing. So I can run it as a send but specifically for printing. Certainly on the dub stage they run the DBX units as a send, routed to the FX stem.

      Hey, that’s fascinating re the Freq Shifter, thanks, I’ll check it out. I’ve played with a few but never found them super useful, but what you describe sounds interesting!
