Author Archives: Tim Prebble

Sport Mode

Yesterday I drove 2 hours over to my favourite exterior recording location – low tide & no wind meant the conditions were perfect for recording my DJI Mavic Pro. The Mavic is physically much smaller than my DJI Phantom 3 Pro, but being a newer generation it has longer battery life (27min), better range (7km) and… sport mode!

In normal use the Mavic feels a little bit sluggish, but I recorded three batteries' worth of moves, passbys, hovers etc in normal mode…

And then for the first time I switched to sport mode and holy sh+t! The Mavic suddenly became a LOT more animated.
Sport Mode allows the Mavic to fly a lot faster and also makes the controls far more responsive… While it is designed for racing, it is also perfect for performing (& recording) more aggressive moves!

These new recordings will be a free update to our SD024 DRONE UAV QUADCOPTER Library – releasing next week

SD Challenge 01 – My Approach

For anyone late to the party, the first HISSandaROAR SD Challenge required creating sound for a short video – two shots of waves – with the restriction that the only source material allowed was a small selection of provided noise samples. Here is a short burst from each of the provided sound files:

This is my approach to it… FWIW

Many years ago I worked with an older re-recording mixer who during an ambience predub one day sardonically commented “ambiences are basically just filtered white noise…” We were joking around, but ever since I have wondered about that comment. On one hand it feels a bit like those people who see a photo shot on film and comment ‘Why bother with film – I could do that in Photoshop!’ without ever realising how misguided their comment is. The issue is not whether you ‘could’ do something, it is whether you actually do it or not. Similarly I could become an astronaut, but it is a meaningless & frankly delusional statement as there is no achievement in hypothesising about what one could do.

But still I wondered: could someone really create realistic ambiences using nothing but filtered white noise?
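As it happens, the mixer's quip is literally implementable. Here is a minimal sketch in Python/NumPy of "filtered white noise" shaped into something surf-like, using a one-pole low-pass whose cutoff is slowly swept by an LFO – every parameter here (LFO rate, cutoff range, output level) is an illustrative guess, not a recipe from the challenge:

```python
import numpy as np

def filtered_noise_ambience(duration_s=10.0, sr=48000, seed=0):
    """Shape white noise into a surf-like bed with a slowly swept one-pole LPF."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * sr))
    # A slow LFO (~0.1 Hz) sweeps the cutoff between ~200 Hz and ~1200 Hz,
    # mimicking the swell of distant surf.
    t = np.arange(noise.size) / sr
    cutoff = 700 + 500 * np.sin(2 * np.pi * 0.1 * t)
    # Per-sample one-pole low-pass: y[n] = y[n-1] + a[n] * (x[n] - y[n-1])
    a = 1.0 - np.exp(-2 * np.pi * cutoff / sr)
    out = np.empty_like(noise)
    y = 0.0
    for n in range(noise.size):
        y += a[n] * (noise[n] - y)
        out[n] = y
    # Normalise to roughly -12 dBFS peak so layers can sum without clipping
    return 0.25 * out / np.max(np.abs(out))
```

Nothing in a few lines like this will fool anyone on its own – the point of the challenge is everything that happens after: layering, editing, panning and automation.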

When I launched this first Sound Design Challenge I actually wondered if it was really achievable. Would all the entries be unrealistic and unpleasant to listen to? Within ten hours of launch the second entry eliminated those fears, with a soundtrack that would be a challenge to distinguish from the real thing in a blind A/B test. And it went on to provide a benchmark that few surpassed. Well done Nicolas Titeux!

I don’t know if you have ever watched those music creation videos by FACTmagazine AGAINST THE CLOCK? Basically a musician is given ten minutes to create a piece of music from scratch… It is an interesting concept, and while this first Sound Design Challenge is fairly relaxed with its timeframe, launching on 19th May and ending on June 1st, I think it is important to remember that regardless of your experience this SONIC TRUISM will always apply:


If you really really wanted to, you could work 12 hours a day for the 14 days of the challenge. There would be a fairly good chance the end result would be very good, but it is totally unrealistic. If someone hired you to edit ambiences for a film based on having seen your finished soundtrack, and you worked at the same pace, you would likely be fired by week 2.
So with each of these challenges I am interested in how much time it took you to complete the work. And to gauge the relative merits of your answer I felt I had to complete the same challenge:
How good could I be in the time allowed?


Before I actually start doing any work I am going to consciously estimate the time involved in this task, so I know if I am running behind schedule or spending too much time on a detail, since the end result is all that matters. Having already watched the video a few times I am pretty sure I can get a first version done in a couple of hours. I would then put it aside, and revisit with fresh ears, refine the elements and then output a mix. I would then revisit it a third time, again with fresh ears, check the existing mix, make some changes & output my final version. Total time estimate = 4 hours.

OK step 1 – download the source material!

I decompress the rar file, and the first thing I do is open the video file in QuickTime Player and Get Info on it – what format & frame rate is it?
OK, it's 720p 23.976fps H264 MP4. As anyone who works in post knows, MP4 is a delivery format, NOT a working format: due to its inter-frame compression, playing forwards is fine, but playing backwards or scrubbing is very slow, as each frame has to be reconstructed from previous frames.
I boot up the MPEG Streamclip app and convert the video to a codec that uses discrete frames and works well in ProTools: Avid's DNxHD format. The original H264 MP4 was 113MB; in DNxHD it is now 1.24GB. I copy it to my SSD, and the audio files across to my audio work drive, a GTech RAID. Time to set up a ProTools session.
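For anyone doing the same conversion without a GUI app, ffmpeg can produce an equivalent all-intra file. A hypothetical helper that builds the command (note I am substituting the DNxHR LB profile here, since plain DNxHD only accepts specific resolution/bitrate combinations, while DNxHR accepts any resolution):

```python
def dnxhd_transcode_cmd(src, dst):
    """Build an ffmpeg command converting an H264 MP4 to an intra-only
    DNxHR .mov, where every frame is discrete so scrubbing is fast.
    Assumes ffmpeg is installed on the system."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_lb",  # DNxHR LB: any resolution, all-intra
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",   # keep the guide track as uncompressed PCM
        dst,
    ]
```

Run it with e.g. `subprocess.run(dnxhd_transcode_cmd("SDC01.mp4", "SDC01.mov"))` – filenames here are placeholders, not the actual challenge deliverables.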

At this stage many people would open from a saved template, but I will quickly assemble a simple session. My usual working format is 24 bit 48k. I import the video and the guide track audio (for the 2 pop) to the start of the session, then set the session's timecode start to match the burnt-in timecode of the video, also checking my timecode timeline matches the video frame rate of 23.976fps.

The final video soundtrack is only stereo, so I create some stereo stem tracks (AMB-A, AMB-B, FX-A, FX-B, FX-C, FX-D) plus a master stereo MIX track. And then I create some audio tracks to feed each AMB stem: 2 C and 2 LR tracks for each, and source tracks for FX: 4 C and 6 LR for each stem. I quickly name my tracks.

I also create some processing audio tracks, for printing plugins. I quickly do all the bus-ing, and set the stems and master track to input monitoring. Then I import the 18 source audio files and I am ready to start work.

Setup time: 15 minutes

Before I listen to any sounds, or start doing any sound design I watch the video a few times and think about it, analysing it. Ok so there are two shots – they are clearly shot at different times of the day and in different locations. The first is a drone shot taken side on to the direction of the waves, out to sea a bit with rolling waves passing camera. The second is shot from the beach, with waves crashing on to rocks. Given the perspective and point of view of the camera these two shots should sound & feel quite different to each other – the first is almost gentle, while the second is more powerful – the wave fair pounds that rock and a lot of water pours in around the rock. I also notice all the weird back waves – that second shot is going to take a lot of elements and a lot of panning!

I think about how I want the two shots to feel

Shot 1 is quite beautiful and a fairly unique perspective – it feels like late afternoon, almost dusk. It looks quite peaceful.

Shot 2 feels a little dangerous. If I stood down in front of that rock I would definitely get wet and possibly knocked off my feet.

I want to feel some major contrast between the two scenes…

I also start to think about all the elements I am going to need
– doppler/passby elements for the first shot
– wave breaks
– wave impacts rock, big & smaller
– wave drags across gravel
– wave/white water surges
– wave meets wave

I notice that, like all good picture editors' output, this video has been output correctly with a SMPTE leader, a 2POP and an FFOA.

(side note: some people entertained themselves designing sound for the SMPTE leader, maybe fun but for all intents & purposes a waste of time. A SMPTE Leader exists solely to verify sync.)

Next I mark up the video with sync points – I like to find the hit points accurately once, and then I never need to manually find sync again, as I have a non-linear grid of markers to work to.

So FFOA is the start of shot 1 and the start of my ambiences – I drop a marker named Shot 1.
I quickly find the start of shot 2 and place a marker named Shot 2.
I find LFOA and place a marker named LFOA.
Now to find the main sync points within the shots.

Shot 1: I drop a marker where the wave is dead centre of its passby, named wave pass centre.

Shot 2: I drop markers at:
– first evidence of wave impact rock
– first spray lands on Left
– first white water floods in L
– wave impacts small rock on R
– wave impacts small rock centre
– reverse wave bounces back from front C
– reverse wave swamps small rock C
– reverse wave swamps big rock
– second wave breaks out on L
– small waves meet & interact C
– small waves meet & interact L

Watching the video again in real time I also notice something fly across the screen, and again a few more times a little later. I drop more markers (not sure these will rate, but noted): insect passby C, insect passby C, insect passby R, insect passby C.

OK I have finished spotting the film – I have sync markers for all action and a clear idea of how I want the end result to feel and what I need to create from the source material… I’m 30 minutes in and it is now time to get busy!

I don’t know about you, but I prefer to have all the source material available on tracks. When it is all sitting in the Region Bin you can’t reorganise it, reorder it or make comments on it… So I make a few new audio tracks (3 C and 2 LR), name them LIB 1 – LIB 5, and drag all 18 source files on to those tracks, but further along the timeline, so they are away from my video & edit session.

I drop a marker on the first library sound and edit the marker number to 99, as a mental note – I can then instantly jump back here at any time later on by hitting 99. on the numeric keypad.

Quickly skipping through the files I listen and then reorder the sounds, putting the bright harsh sounds on one Lib track, the dull sounds on another & the misc on a third track.

The first actual sound work I do is to create some basic background ambiences for the two shots. I have a quick listen through the source material, select a few files and start cutting them as a generalised distant layer of ambience: shot 1 layers on the AMB-A tracks (feeding the AMB-A stem) and shot 2 elements on to the AMB-B tracks (feeding the AMB-B stem).

Shot 1 we are out at sea, so the general ambience will be fairly inactive – just a diffuse gentle roar, no waves are actually breaking & the shoreline is too far away to worry about.

Shot 2 we are on a gravel beach, so it can be a little brighter… but there are a lot of other elements to go into that shot, so I don’t sweat the details too much.

I select some mono files and create a stereo ambience by offsetting parts of the same file. I place them & edit them to length, paying particular attention to the start of shot 1 (I don’t want the ambiences to start too abruptly or bump in too hard). But I do want the shift from shot 1 to shot 2 to feel dramatically different, so I make that a fairly hard cut. I set some relative levels using volume automation, and I now have a very basic ambience bed to work against & layer elements over.
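The offset trick is worth spelling out: playing one mono recording against a time-shifted copy of itself gives two channels that are decorrelated enough to read as stereo. A minimal sketch (the 3 second offset is an arbitrary example; any offset long enough to break the correlation works):

```python
import numpy as np

def mono_to_stereo_offset(mono, sr=48000, offset_s=3.0):
    """Make a decorrelated stereo pair from a single mono recording
    by playing it against a time-offset copy of itself.
    Assumes the recording is longer than the offset."""
    offset = int(offset_s * sr)
    usable = mono.size - offset
    left = mono[:usable]              # original from the top
    right = mono[offset:offset + usable]  # same material, shifted in time
    return np.stack([left, right], axis=1)  # interleaved shape (n, 2)
```

With steady, texture-like material (surf, wind, rain) the ear hears width rather than an echo, which is exactly why it suits ambience beds.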

40 minutes have passed. No rocket science as yet…

Next I am going to generate some variations of the passby for shot 1 – I check the timing: duration is 12 seconds, with a 5 second approach and 7 second away. I like the way the wave is not breaking but almost bubbling, and quite turbulent. The processing I try I will apply to a few different source files, just so I have some options.

Obviously I need to automate a LPF, but I will do that in sync and in place via automation. First I am going to try creating some new turbulent material using the GRM Shuffling plugin

And then I will try putting the results through a Doppler plugin, to get a natural sounding passby… bearing in mind I need at least 5 second approach & 7 second away… But I will also try some faster, shorter passby variations just for detail when the wave pass is close to camera.
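The level and pan behaviour of such a passby can be sketched numerically. Here is a minimal inverse-distance model matching the shot's timing (5 second approach, 7 second away); the closest-approach distance and the source speed are notional, purely for illustration:

```python
import numpy as np

def passby_envelope(sr=48000, approach_s=5.0, away_s=7.0):
    """Gain and pan curves for a 12 s passby (5 s in, 7 s out).
    Gain follows a simple 1/r loudness model, peaking at the pass point;
    pan sweeps from left of centre to hard right."""
    n_in = int(approach_s * sr)
    n_out = int(away_s * sr)
    # Time relative to the closest approach (t = 0 at the pass point)
    t = np.concatenate([np.linspace(-approach_s, 0, n_in, endpoint=False),
                        np.linspace(0, away_s, n_out)])
    closest = 1.0                        # notional closest distance (arbitrary units)
    dist = np.sqrt(closest**2 + t**2)    # source moving past at 1 unit/s
    gain = closest / dist                # inverse-distance level
    pan = np.clip(t / away_s, -1, 1)     # -1 = hard left, +1 = hard right
    return gain, pan
```

A dedicated Doppler plugin also shifts pitch through the pass, which this sketch deliberately leaves out – but the gain/pan shape is the part you would otherwise be drawing by hand as automation.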

I also process some sounds through impulse responses using Avid Space plugin… and try dopplering some of those… I also generate some nice movement using Cytomic The Drop filter plugin (my favourite filter) playing with the LFO speed mapped to LPF frequency…

After I feel like I have enough source material, I start cutting in sync to picture, with elements sorted by the role they are playing – bussing source tracks to stems: AMB-A and AMB-B for the two ambiences, FX-A – passbys/moves, FX-B – sub impacts, FX-C – surges and FX-D – splash/spray… and as I work I also start automating, mainly drawing volume graphs & automating LPF frequency – my aim is to have no ‘static’ or constant sounds at all, other than the distant ambiences…

The movement in the first shot is fairly straightforward, and I track the pass elements across the screen, while still maintaining distant ambiences (it’s only one wave passing by, not the entire ocean).

Movement in the second shot is far more complex, and I solo elements as I pan, often locating and drawing pan automation by hand with the pencil tool…

When automating The Drop’s LPF, I try to be careful that you cannot hear or perceive a frequency sweep – too much resonance gives it away…

Once I have a good balance, I solo the elements feeding each stem, to check how they are working and that there are no redundant or unnecessary elements which might mask other, more important elements. I prefer to work in stems as it allows me to logically group elements & rebalance the group using stem automation.

I also know from experience that, for example, in a real film mix these shots might well have score across them, and I want to help the re-recording mixers by giving them easy access to elements. I can imagine that if a director deemed the FX ‘too noisy’, much might be pulled back in level, but allowing e.g. the wave impacts & spray to be easily accentuated would make their life & mine easier…

I am not a fan of using automation instead of actual editing – if there is an impact, it should be from a separate element that is only contributing impact, not part of a general track that gets a push in automation at that point to create impact.

I put a limiter across the final mix bus and print a version of my mix so I can listen with fresh ears the next day – my session is now looking like this:

I do some more tweaks, then output a mix, embed it in the video and upload it to YouTube. I then watch & listen to the version on YouTube. There is a very basic but VERY IMPORTANT rule of post production:
If something is leaving your facility or studio, you MUST do a reality check & final Quality Control, verifying that the delivery version matches your final version. If you skip this step you fail at post production.

I was really surprised that two people submitted Youtube links to me that clearly had faults – one was entirely mute, and another had second-long silences in it. I offered both people a second chance as I wanted to hear their work as they intended it. But… if they do not learn a lesson from this incident, the next time they do something like that it could cost them their job, as it might be a director or producer who does not give them a second chance.


Final comments: this was an interesting challenge, and while my mix & panning could do with more work, I am ok with the end result… Of course it will never play this loud in a film, but having elements split to stems it would be easy to rebalance the mix without having to get amongst the source tracks….

I have a clear idea for the next SD Challenge – it won’t be for a month or two, as it will be for a library I have not started recording yet. But the source material will lend itself to music as much as sound design, so I look forward to hearing what you do with it!