Monthly Archives: June 2017

Sport Mode

Yesterday I drove 2 hours over to my favourite exterior recording location – low tide & no wind meant the conditions were perfect for recording my DJI Mavic Pro. The Mavic is a drone physically much smaller than my DJI Phantom 3 Pro, but being a newer generation it has longer battery life (27 min), better range (7 km) and… sport mode!

In normal use the Mavic feels a little sluggish, but I recorded three batteries' worth of moves, passbys, hovers etc. in normal mode…

And then for the first time I switched to sport mode and holy sh+t! The Mavic suddenly became a LOT more animated.
Sport Mode allows the Mavic to fly a lot faster and also makes the controls far more responsive… While it is designed for racing, it is also perfect for performing (& recording) more aggressive moves!

These new recordings will be a free update to our SD024 DRONE UAV QUADCOPTER Library – releasing next week

SD Challenge 01 – My Approach

For anyone late to the party, the first HISSandaROAR SD Challenge required creating sound for a short video – two shots of waves – with the restriction that the only source material allowed was a small selection of provided noise samples. Here is a short burst from each of the provided sound files:

This is my approach to it… FWIW

Many years ago I worked with an older re-recording mixer who, during an ambience predub one day, sardonically commented “ambiences are basically just filtered white noise…” We were joking around, but ever since I have wondered about that comment. On one hand it feels a bit like those people who see a photo shot on film and comment ‘Why bother with film – I could do that in Photoshop!’ without ever realising how misguided their comment is. The issue is not whether you ‘could’ do something, it is whether you actually do it or not. Similarly, I could claim I could become an astronaut, but it is a meaningless & frankly delusional claim, as there is no achievement in hypothesising about what one could do.

But still I wondered: could someone really create realistic ambiences using nothing but filtered white noise?
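(As an aside: the most literal version of that idea – broadband noise shaped by nothing more than a filter and a slow swell – fits in a dozen lines. This is a hypothetical Python/numpy sketch purely to illustrate the premise; the cutoff, LFO rate and filenames are arbitrary assumptions, and it is nowhere near a finished ambience.)

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr, dur = 48000, 30                          # 30 seconds of "ambience"
noise = np.random.randn(sr * dur)

# Take the hiss off the top: a gentle 2nd-order lowpass around 1.5kHz
b, a = butter(2, 1500 / (sr / 2), btype="low")
bed = lfilter(b, a, noise)

# A very slow swell so the bed doesn't sit dead still
t = np.arange(sr * dur) / sr
bed *= 0.6 + 0.4 * np.sin(2 * np.pi * 0.05 * t)

bed /= np.max(np.abs(bed))                   # normalise, then leave headroom
wavfile.write("filtered_noise_bed.wav", sr, (bed * 0.5 * 32767).astype(np.int16))
```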

When I launched this first Sound Design Challenge I actually wondered if it was really achievable. Would all the entries be unrealistic and unpleasant to listen to? Within ten hours of launch the second entry eliminated those fears, with a soundtrack that would be a challenge to distinguish from the real thing in a blind A/B test. And it went on to provide a benchmark that few surpassed. Well done Nicolas Titeux!

I don’t know if you have ever watched FACT Magazine’s AGAINST THE CLOCK music creation videos? Basically a musician is given ten minutes to create a piece of music from scratch… It is an interesting concept, and while this first Sound Design Challenge is fairly relaxed with its timeframe, launching on 19th May and ending on June 1st, I think it is important to remember that regardless of your experience this SONIC TRUISM will always apply:

IT IS NOT “HOW GOOD ARE YOU?”,
IT IS “HOW GOOD CAN YOU BE, IN THE TIME ALLOWED?”

If you really really wanted to, you could work 12 hours a day for the 14 days of the challenge. There would be a fairly good chance the end result would be very good, but it is totally unrealistic. If someone hired you to edit ambiences for a film based on having seen your finished soundtrack, and you worked at the same pace, you would likely be fired by week 2.
So with each of these challenges I am interested in how much time it took you to complete the work. And to gauge the relative merits of your answer I felt I had to complete the same challenge:
How good could I be in the time allowed?

START THE CLOCK!

Before I actually start doing any work I am going to consciously estimate the time involved in this task, so I know if I am running behind schedule or spending too much time on a detail, since the end result is all that matters. Having already watched the video a few times I am pretty sure I can get a first version done in a couple of hours. I would then put it aside, and revisit with fresh ears, refine the elements and then output a mix. I would then revisit it a third time, again with fresh ears, check the existing mix, make some changes & output my final version. Total time estimate = 4 hours.

OK step 1 – download the source material!

I decompress the rar file, and the first thing I do is open the video file in QuickTime Player and Get Info on it – what format & frame rate is it?
OK, it’s 720p 23.976fps H264 MP4. As anyone who works in post knows, MP4 is a delivery format NOT a working format – because it uses interframe compression, playing forwards is fine, but playing backwards or scrubbing is very slow, as each frame has to be reconstructed from previous frames.
I boot up the MPEG Streamclip app and convert the video to a codec that uses discrete frames and works well in ProTools: Avid’s DNxHD format. The original H264 MP4 was 113MB; in DNxHD it’s now 1.24GB. I copy it to my SSD and the audio files across to my audio work drive, a GTech RAID. Time to set up a ProTools session.
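(If you don’t have MPEG Streamclip handy, a roughly equivalent conversion can be scripted with ffmpeg. This is a hypothetical sketch, not the settings used here – the filenames are made up and the resolution-independent DNxHR SQ profile is just one plausible choice.)

```python
import subprocess

# Convert the delivered H264 MP4 to an intraframe codec in a QuickTime wrapper,
# so scrubbing and reverse play no longer depend on decoding previous frames.
subprocess.run([
    "ffmpeg",
    "-i", "sd_challenge_01.mp4",            # hypothetical input filename
    "-c:v", "dnxhd", "-profile:v", "dnxhr_sq",
    "-pix_fmt", "yuv422p",
    "-c:a", "pcm_s16le",                    # keep the guide track as plain PCM
    "sd_challenge_01_dnxhr.mov",            # hypothetical output filename
], check=True)
```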

At this stage many people would open from a saved template, but I will quickly assemble a simple session. My usual working format is 24 bit 48k. I import the video and the guide track audio (for the 2 pop) to the start of the session, then set the session’s timecode start to match the burnt-in timecode of the video (01.00.00.00), also checking that my timecode timeline matches the video frame rate of 23.976fps.

The final video soundtrack is only stereo, so I create some stereo stem tracks (AMB-A, AMB-B, FX-A, FX-B, FX-C, FX-D) plus a master stereo MIX track. And then I create some audio tracks to feed each AMB stem: 2 C and 2 LR tracks for each, and source tracks for FX: 4 C and 6 LR for each stem. I quickly name my tracks.

I also create some processing audio tracks, for printing plugins. I quickly do all the bussing, and set the stems and master track to input monitoring. Then I import the 18 source audio files and I am ready to start work.

Setup time: 15 minutes

Before I listen to any sounds, or start doing any sound design I watch the video a few times and think about it, analysing it. Ok so there are two shots – they are clearly shot at different times of the day and in different locations. The first is a drone shot taken side on to the direction of the waves, out to sea a bit with rolling waves passing camera. The second is shot from the beach, with waves crashing on to rocks. Given the perspective and point of view of the camera these two shots should sound & feel quite different to each other – the first is almost gentle, while the second is more powerful – the wave fair pounds that rock and a lot of water pours in around the rock. I also notice all the weird back waves – that second shot is going to take a lot of elements and a lot of panning!

I think about how I want the two shots to feel

Shot 1 is quite beautiful and a fairly unique perspective – it feels like late afternoon, almost dusk. It looks quite peaceful.

Shot 2 feels a little dangerous. If I stood down in front of that rock I would definitely get wet and possibly knocked off my feet.

I want to feel some major contrast between the two scenes…

I also start to think about all the elements I am going to need
– doppler/passby elements for the first shot
– wave breaks
– wave impacts on rock, big & small
– wave drags across gravel
– wave/white water surges
– wave meets wave

I notice that, as all good picture editors do, this video has been output correctly with a SMPTE leader, with the 2POP at 01.00.06.00 and FFOA at 01.00.08.00

(side note: some people entertained themselves designing sound for the SMPTE leader, maybe fun but for all intents & purposes a waste of time. A SMPTE Leader exists solely to verify sync.)

Next I mark up the video with sync points, as I like to find the hit points accurately once & then never need to manually find sync again – I have a non-linear grid of markers to work to.

So FFOA is the start of shot 1 and the start of my ambiences; I drop a marker at
01.00.08.00 Shot 1
I quickly find start of shot 2 and place a marker at
01.00.20.15 Shot 2
I find LFOA and place a marker at
01.00.40.00 LFOA
Now to find the main sync points within the shots.

Shot 1 I drop a marker when the wave is dead centre of its passby at 01.00.12.22 wave pass centre

Shot 2 I drop markers:
01.00.24.05 first evidence of wave impact rock
01.00.25.15 first spray lands on Left
01.00.26.10 first white water floods in L
01.00.27.06 wave impacts small rock on R
01.00.28.04 wave impacts small rock centre
01.00.31.23 reverse wave bounces back from front C
01.00.34.10 reverse wave swamps small rock C
01.00.35.14 reverse wave swamps big rock
01.00.36.03 second wave breaks out on L
01.00.37.08 small waves meet & interact C
01.00.38.15 small waves meet & interact L

Watching the video again in real time I also notice something fly across the screen, and again a few more times a little later, so I drop more markers (not sure these will rate, but noted)

01.00.27.20 insect passby C
01.00.31.18 insect passby C
01.00.34.03 insect passby R
01.00.37.10 insect passby C

OK I have finished spotting the film – I have sync markers for all action and a clear idea of how I want the end result to feel and what I need to create from the source material… I’m 30 minutes in and it is now time to get busy!
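(Side note: if you ever need markers like these as sample-accurate offsets – for a script, a conform or a sanity check – the arithmetic for non-drop timecode at 23.976fps is simple. A hypothetical Python helper; the 48k sample rate and 01.00.00.00 start are this session’s values, everything else is generic.)

```python
# Non-drop-frame timecode counts a nominal 24 frames per timecode-second,
# while the video actually plays at 24000/1001 (23.976) frames per real second.
NOMINAL_FPS = 24
ACTUAL_FPS = 24000 / 1001
SR = 48000                       # session sample rate

def tc_to_samples(tc: str, start_tc: str = "01.00.00.00") -> int:
    """Convert a burnt-in timecode like '01.00.12.22' into a sample offset
    from the session start timecode."""
    def frames(t):
        hh, mm, ss, ff = (int(x) for x in t.split("."))
        return ((hh * 60 + mm) * 60 + ss) * NOMINAL_FPS + ff
    real_seconds = (frames(tc) - frames(start_tc)) / ACTUAL_FPS
    return round(real_seconds * SR)

# e.g. the 'wave pass centre' marker in shot 1:
print(tc_to_samples("01.00.12.22"))   # samples after 01.00.00.00
```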

I don’t know about you, but I prefer to have all the source material available on tracks. When they are all sitting in the Region Bin you can’t reorganise or reorder them, or make comments on them… So I make some new audio tracks (3 C and 2 LR), name them LIB 1 – LIB 5 and drag all 18 source files on to those tracks, but further along the timeline, so they are away from my video & edit session.

I drop a marker on the first library sound and edit the marker number to 99, as a mental note – I can then instantly jump back here at any time later on by hitting 99. on the numeric keypad.

Quickly skipping through the files I listen and then reorder the sounds, putting the bright harsh sounds on one LIB track, the dull sounds on another & the misc sounds on a third track.

The first actual sound work I feel like doing is creating some basic background ambiences for the two shots. I have a quick listen through the source material, select a few files and start cutting them as a generalised distant layer of ambience: shot 1 layers on the AMB1 tracks (feeding the AMB1 stem) and shot 2 elements on to the AMB2 tracks (feeding the AMB2 stem)

Shot 1 we are out at sea, so the general ambience will be fairly inactive – just a diffuse gentle roar, no waves are actually breaking & the shoreline is too far away to worry about.

Shot 2 we are on a gravel beach, so it can be a little brighter… but there are a lot of other elements to go into that shot so I don’t sweat the details too much.

I select some mono files and create a stereo ambience by offsetting parts of the same file, then place them & edit them to length, paying particular attention to the start of shot 1 (I don’t want the ambiences to start too abruptly or bump in too hard). I do want the shift from shot 1 to shot 2 to feel dramatically different, so I make that a fairly hard cut. I set some relative levels using volume automation and I now have a very basic ambience bed to work against & layer elements over.
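(That offset trick is worth spelling out: the same mono file feeds both channels, but one channel starts further into the file, so the two sides decorrelate and the ear hears a wide diffuse bed rather than doubled mono. A hypothetical numpy/soundfile sketch – the filename and the 350ms offset are assumptions, in practice you set the offset by ear.)

```python
import numpy as np
import soundfile as sf                # assumed available for file I/O

mono, sr = sf.read("noise_source.wav")        # hypothetical mono source file
offset = int(0.35 * sr)                       # ~350ms offset, chosen by ear

# Left channel from the top of the file, right channel starting later in the
# same file; with noise-like material this reads as a wide bed, not an echo.
length = len(mono) - offset
stereo = np.column_stack([mono[:length], mono[offset:offset + length]])

sf.write("pseudo_stereo_ambience.wav", stereo, sr)
```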

40 minutes have passed. No rocket science as yet…

Next I am going to generate some variations of the passby for shot 1 – I check the timing: duration is 12 seconds, with a 5-second approach and 7 seconds away. I like the way the wave is not breaking but almost bubbling, and quite turbulent. Whatever processing I try, I will apply it to a few different source files, just so I have some options.

Obviously I need an automated LPF, but I will do that in sync and in place via automation later. First I am going to try creating some new turbulent material using the GRM Shuffling plugin

And then I will try putting the results through a Doppler plugin, to get a natural-sounding passby… bearing in mind I need at least a 5-second approach & 7 seconds away… But I will also try some faster, shorter passby variations, just for detail when the wave pass is close to camera.
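(Under the hood a Doppler plugin is essentially doing two things: a time-varying propagation delay, which bends the pitch, and distance-based attenuation. A very crude hypothetical Python version – the speed and distance values are assumptions sized roughly for a 5-second approach and 7 seconds away, not what any actual plugin does internally.)

```python
import numpy as np

def crude_passby(x, sr, approach_s=5.0, away_s=7.0,
                 closest_m=4.0, speed_ms=6.0, c=343.0):
    """Very crude Doppler passby: a source moves past the listener in a straight
    line; the varying propagation delay bends the pitch, 1/distance fades it."""
    n = int((approach_s + away_s) * sr)
    t = np.arange(n) / sr - approach_s            # zero at the closest point
    dist = np.hypot(closest_m, speed_ms * t)      # straight-line distance to listener
    delay = dist / c * sr                         # propagation delay in samples
    read_pos = np.arange(n) - delay               # variable delay-line read position
    y = np.interp(read_pos, np.arange(len(x)), x, left=0.0, right=0.0)
    return y * (closest_m / dist)                 # simple distance attenuation

# usage: passby = crude_passby(source_noise, 48000)
```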

I also process some sounds through impulse responses using the Avid Space plugin… and try dopplering some of those… I also generate some nice movement using Cytomic’s The Drop filter plugin (my favourite filter), playing with the LFO speed mapped to LPF frequency…
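(The LFO-to-LPF idea is also easy to sketch: sweep a simple one-pole lowpass with a slow sine and static noise starts to churn. A hypothetical Python version – the rate and cutoff range are assumptions, and The Drop is obviously doing far more than this.)

```python
import numpy as np

def lfo_lowpass(x, sr, lfo_hz=0.15, f_lo=250.0, f_hi=3500.0):
    """One-pole lowpass whose cutoff is swept by a slow sine LFO,
    turning static noise into something that breathes and moves."""
    t = np.arange(len(x)) / sr
    cutoff = f_lo + (f_hi - f_lo) * 0.5 * (1.0 + np.sin(2.0 * np.pi * lfo_hz * t))
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)   # per-sample filter coefficient
    y = np.empty_like(x)
    state = 0.0
    for i in range(len(x)):                            # slow, but fine for a sketch
        state += alpha[i] * (x[i] - state)
        y[i] = state
    return y
```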

After I feel like I have enough source material, I start cutting in sync to picture, with elements sorted by the role they are playing – bussing source tracks to stems: Ambience 1, Ambience 2, FXA – passbys/moves, FXB – sub impacts, FXC – surges and FXD – splash/spray… and as I work I also start automating, mainly drawing volume graphs & automating LPF frequency – my aim is to have no ‘static’ or constant sounds at all, other than the distant ambiences…

The movement in the first shot is fairly straightforward, and I track the pass elements across the screen, while still maintaining the distant ambiences (it’s only one wave passing by, not the entire ocean)

Movement in the second shot is far more complex, and I solo elements as I pan, often locating and drawing pan automation by hand with the pencil tool…

When automating The Drop’s LPF I am careful that you cannot hear or perceive a frequency sweep – too much resonance and it gives the game away…

Once I have a good balance, I solo the elements feeding each stem, to check how they are working and that there are no redundant or unnecessary elements which might mask other, more important elements. I prefer to work in stems as it allows me to logically group elements & rebalance the group using stem automation.

I also know from experience that, for example, in a real film mix these shots might well have score across them, and I want to help the re-recording mixers by giving them easy access to elements. I can imagine that if a director deemed the FX ‘too noisy’ much might be pulled back in level, but allowing, say, the wave impacts & spray to be easily accentuated would make their life & mine easier…

I am not a fan of using automation instead of actual editing – if there is an impact, it should come from a separate element that is only contributing impact, not from a general track that gets a push in automation at that point to create impact.

I put a limiter across the final mix bus and print a version of my mix so I can listen with fresh ears the next day – my session is now looking like this:

I do some more tweaks, then output a mix, embed it in the video and upload it to YouTube. I then watch & listen to the version on YouTube. There is a very basic but VERY IMPORTANT rule of post production:
If something is leaving your facility or studio, you MUST do a reality check & final Quality Control, verifying that the delivery version matches your final version. If you skip this step you fail at post production.

I was really surprised that two people submitted YouTube links that clearly had faults – one was entirely mute, and another had second-long silences in it. I offered both people a second chance, as I wanted to hear their work as they intended it. But… if they do not learn a lesson from this incident, the next time they do something like this it could cost them their job, as it might be a director or producer who does not offer a second chance.

QUALITY CONTROL = ALWAYS REALITY CHECK YOUR WORK!
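(A completely mute render is also trivial to catch automatically before upload – even a tiny script that reports the peak and RMS of the bounced file will flag it. A hypothetical Python sketch using the soundfile library; the filename and threshold are assumptions. It is no substitute for actually watching the upload back, but it costs nothing.)

```python
import numpy as np
import soundfile as sf

def quick_qc(path, silence_thresh_db=-60.0):
    """Report peak/RMS of a rendered mix and warn if it looks mute."""
    audio, sr = sf.read(path)
    peak = np.max(np.abs(audio))
    rms = np.sqrt(np.mean(audio ** 2))
    peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
    rms_db = 20 * np.log10(rms) if rms > 0 else float("-inf")
    print(f"{path}: peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")
    if peak_db < silence_thresh_db:
        print("WARNING: this render is effectively silent - do not upload it!")

quick_qc("final_mix_for_youtube.wav")      # hypothetical delivery filename
```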

Final comments: this was an interesting challenge, and while my mix & panning could do with more work, I am OK with the end result… Of course it will never play this loud in a film, but with elements split to stems it would be easy to rebalance the mix without having to get amongst the source tracks…

I have a clear idea for the next SD Challenge – it won’t be for a month or two, as it will be for a library I have not started recording yet. But the source material will lend itself to music as much as sound design, so I look forward to hearing what you do with it!

SD Challenge 01 Winners

For anyone late to the party, the first HISSandaROAR SD Challenge required creating sound for a short video – two shots of waves – with the restriction that the only source material allowed was a small selection of provided noise samples. Here is a short burst from each of the provided sound files:

I will be the first to admit that this HISSandaROAR SD Challenge was difficult. Even for a seasoned sound effects editor or sound designer this challenge would have required some clever work to achieve a great result.

To appreciate what was involved, I also thought it only fair if I did my own challenge – you can read about it & hear it here

After listening to all 76 entries, I narrowed it down to the 25 I liked best, so each of these 25 people will receive a free copy of our new sound library: SD030 NOISE SOURCE Library

Congratulations to Nicolas Titeux, Harel Tsemah, Olga Bulygo, Matthew Simonson, Nicolas Roulis, Baptiste Quemener, Adam Primack, John Grzinich, Ben Kersten, Ben Swarbrick, Julius Kukla, Stuart Ankers, Alex Gregson, Nicolas Maurin, Julienne Guffain, Ali Tocher, Richard Shapiro, Nick Petoyan, Gerard Gual, Sam Rogers, Nils Vogel-Bartling, Kosma Kelm, Dave Pearce, Jeffrey Mengyan, Vincent Fliniaux. Check out their work below…

Interested in entering our next SD Challenge? Join the mail list here

Nicolas Titeux – video
Location: France, south
Links: www.nicolastiteux.com
Main Tools: Pro Tools, a little EQ and of course faders
Time involved: 1 hour
Comments: “I found that each provided sound had a specific tone. I just EQed some a little to make them more contrasted. Then I tried to imagine what the shots would sound like in real life. I used some sounds for the background, some for the close waves and some for the distant ones. The rest of the work was giving life to those continuous sounds with my faders – for example, first bringing in some low end when the wave is distant, then making it come closer with some mid-range sounds, and adding high-frequency detail when the wave passes close.”

Harel Tsemah – video
Location: Tel Aviv, Israel
Links: hareltsemah.com https://soundcloud.com/hareltsemah
Main Tools: For this type of project I mainly use FabFilter plugins: EQ, Compressor & Limiter. For the moving parts I used volume automation and saturators. The work process was a great time to experiment and test my skills using only these noises.
Time involved: I’ve spent approximately 2 hours creating this piece
Comments: “My global thinking during the work process was how to make it sound natural and realistic, but also keep in mind that I’m using analog/digital noises, and find the right balance between those two worlds.”

Olga Bulygo – video
Location: Vilnius / Lithuania
Links: http://olgabulygo.wordpress.com 
Main Tools: EQ, Altiverb, iZotope Ozone
Time involved: around 2-3 hours
Comments: “I tried to use all the sounds available in the library, so I uploaded them all to the project, then made a stereo layer to combine the two shots (and black areas), then spent some time playing with sync and pan of the waves (which were pretty complex in the second shot).”

Matthew Simonson – video
Location: Denver, Colorado, USA
Links: musicinobjects.com matthewsimonson.bandcamp.com
Main Tools: Ableton Live, NI Transient Master, Strymon Big Sky and Timeline
Time involved: Roughly 3-4 hours
Comments: “I focused mainly on the directionality of the waves and only ended up using two of the noise samples for the waves.  The noise samples that had some tone in them were sculpted into synth-like sounds.  That part was the most fun.”

Nicolas Roulis – video
Location: France
Main Tools: My main plugin was just an EQ for low pass and high pass.
I have no secret: layering, good ears and reflection (and compression and reverb to consolidate all of this)
Time involved: I Spent 2 hours on it

Baptiste Quemener – video
Location: France
Links: a game that I made for the last ludum dare: here
Main Tools: ProTools, basic channel strip, Waves TrueVerb, and guitar amp
Time involved: 6 hours
Comments: “I always like the idea of creating or imitating sound from scratch, giving the impression or illusion of a natural phenomenon. This is what I tried with your video. The first part is more an impression of a wave. I used my guitar amp to reamp white noise, basically getting the convolved result of a spring reverb and the resonant filter of a wah-wah pedal, giving this really tonal wave passing by. The second part is more an imitation process, creating the illusion by going into detail and recreating everything we would expect of a wave crashing on a rock (depth of field, impact, droplets of water…) but without the ‘real’ sound. As for the film leader, well, basically it is always fun to create strange sounds.”

Adam Primack – video
Location: Los Angeles, CA, USA
Links: aprimack.com vscrlfx.com
Main Tools: I used a lot of EQ and Reverb, but the shining star is an unreleased piece of software that I was recently given, called Transformizer.
Time involved: five or six hours cumulatively
Comments: “Transformizer reads behavioral characteristics – pitch, amplitude, and formant – of one sound and can apply those readings to a different sound. Following the amplitude and formant of a wave (or seagull!) recording in real time was a sneaky and successful trick.”

John Grzinich – video
Location: Estonia
Links: http://maaheli.ee/main/
Main Tools: Adobe Audition. Pretty much used FFT filters for the wave noise, some EQ and a bit of delay. I exported the audio master then compressed it for YouTube in Adobe Premiere.
Time involved: 2.5 hours
Comments: “I took this challenge to brief myself on using Adobe Audition and it was perfect for that. Also, I actually have used pink noise files for the basis of some electronic music projects, but never sound design, so I liked the idea of this challenge. Thanks for offering this and I look forward to working with your library.”

Ben Kersten – video
Location: Seattle, Washington, USA
Links: https://clatterdin.com/talent/#staff http://iambenkersten.com/
Main Tools: Pro Tools. Main plugins used – SoundToys suite (a LOT of filter freak), Sonnox EQ, Dynamics and Reverb, Native Instruments Kontakt 5. I also used Audacity so I could ‘paul stretch’ a few things
Time involved: Roughly 5 Hours
Comments: “I mostly approached this like a film, but I did cut some corners regarding organization and general conventions as I knew no one else would be opening my session. First thing I did was to simply get some sounds on the timeline with a rough balance of how I wanted the two scenes to play against each other and then tweak it from there. I approached the sound design as a somewhat abstract piece. I was more concerned with getting an emotion across as opposed to making it sound ‘realistic’.”

Ben Swarbrick – video
Location: Toronto, Ontario, Canada
Links: www.graysonmatthews.com
Main Tools: Both Laura and I run Pro Tools on the Mac platform, and I believe we both used mainly the standard Digi plugins for this project (the delays may have been from a WAVES bundle)
Time involved: 5 hours. My assistant (Laura Titchner) did two hours of sound design, I did two hours of sound design, and I spent an hour sewing it all together and mixing it.
Comments: “I really enjoyed the challenge and hope you do this again..”

Julius Kukla – video
Location: United States
Links: www.kuklaproductions.com
Main Tools: I really only used D-Verb and the basic Pro Tools 7-band EQ for this project. I also did a bit of time/pitch shifting with the standard Pro Tools Time Shift plug-in.
Time involved: I spent around 5 hours on this project.
Comments: “I have finally decided, because of this project, that I enjoy pink noise the most, with red noise being a close second.”

Stuart Ankers – video
Location: England
Links: www.filmandtvpro.com/uk/crew/profile/stuart-ankers
Main Tools: Audacity (Paulstretch / sliding time/scale/pitch shift) Pro Tools 11 (Varispeed elastic audio) Pitch shift + layering
Time involved: 3-4 hours
Comments: “I wanted to approach the sound design in a musical sense rather than the straight forward replication of the waves – I wanted to see how this would fit with the visuals.”

Alex Gregson – video
Location: Manchester, England
Links: facebook.com/344audio 344audio.com
Main Tools: I kept it simple, using mainly stock plugins from Pro Tools, but also used Waves Soundshifter, NI Molekular and NI Skanner XT
Time involved: 4 Hours
Comments: “I originally completed just the diegetic sounds for the shots, but then decided to push it further with abstract drones and techno music made from the given sound library.”

Nicolas Maurin – video
Location: Paris, France
Links: www.nicolas-maurin.com
Main Tools: Pro Tools with FabFilter Pro-Q 2 (lots of EQ automation), saturation, LoAir, FilterFreak, reverb & delay (mostly Altiverb and Valhalla ones)
Time involved: around 6 hours I guess on multiple days when I had time
Comments: “I started by designing around fifteen sounds by running the imposed sounds through a whole chain of reverbs, delays and random plugins. I tried to create as many sounds as possible – covering the low frequencies to fill the bottom end, and brighter noises for designing wind, foam and so on – trying to have a maximum of texture. Then it was essentially editing, particularly with Pro Tools clip gain, plus EQ/filter and volume automation.”

Julienne Guffain – video
Location: NYC, USA
Links: https://vimeo.com/user7942268
Main Tools: I used mostly Sound Toys, GRM Tools, DMG EQs, + a lot of LoAir
Time involved: According to my session file backups I had the project open for a total of about 8hrs over the course of several days. I would try some things and walk away, come back, revise, etc!
Comments: “I found the connection between white noise and water an obvious pairing, so I felt challenged to go beyond that concept.  Creating movement and carving space with multiple white noises was also a good challenge!”

Ali Tocher – video
Location: NZ (living in UK)
Links: www.hellolooklisten.com
Main Tools: DAW: Reaper, Plugs: Clip 1 was with Zynaptic Morph / birds were with Adobe Audition pitch bender, Clip 2 was granular synthesis with The Mangle by Sound Guru
Time involved: 3-4 hours
Comments: “I felt like being a native coastline dweller was a considerable advantage here! Straight away I knew which tools I would use for each section, so I was able to get right to work. The first shot suited Morph because I was able to use the X/Y controller to bring the main wave across the screen effectively. The second clip needed the ability to dynamically raise and lower the impact, intensity and general level of chaos, so The Mangle was an obvious choice for this for me. The birds were inspired by my Brighton soundscape, seagulls are omnipresent here. I had many recordings of them so I analysed the spectrogram of one of my recordings and then carved this out of the tonal sample provided.”

Richard Shapiro – video
Location: Los Angeles, California, USA
Links: IMDB
Main Tools: 95% Wave Warper by Sound Morph. I used S-Layer by Twisted Tools for the countdown leader and the jingle at the end.
Time involved: 3 hours

Nick Petoyan – video
Location: Los Angeles California, USA
Links: IMDB vimeo portfolio
Main Tools: EQ 7 Band, Pitch Shift II
Time involved: Six Hours
Comments: “Interesting how your mind can be convinced that what it’s hearing is, in fact, something else. Great learning experience.”

Gerard Gual – video
Location: Barcelona, Spain
Links: recent music work doco1 doco2 doco3 doco4
Main Tools: Steinberg Nuendo 7
Time involved: It took me an hour to do all the work, the editing and mix
Comments: “I really enjoyed doing this edit with the sounds we were given. A challenge :)”

Sam Rogers – video
Location: Adelaide, Australia
Links: www.boomroomfoley.com IMDB
Main Tools: Pro Tools 12 HD. Stock standard PT plugs – EQ III, Pitch II, Sci-Fi and Vari-Fi.
Time involved: All up, a good day or so mucking about with it.. I’ll say 10 hours at a guess
Comments: (Man, I wish producers would give us 10 hrs per minute)

Nils Vogel-Bartling – video
Location: Berlin, Germany
Links: www.studioamfluss.de www.filmtondesign.de
Main Tools: DAW: Pro Tools 11 HD, Fabfilter Pro Q 2, Spanner, Phoenix Reverb, for the wind I used GRM Tools Doppler and Zynaptiq Adaptiverb
Time involved: 6-8 hours editing/processing/trying

Kosma Kelm – video
Location: Poland
Links: kelmsound linkedin
Main Tools: Because I work in Reaper, I used mostly Cockos’ internal plugins, of which the EQ did most of the job. I also used a few instances of the Softube Saturation Knob and the JS Exciter enhancer, to get richer high end in several samples
Time involved: The challenge took me one 4-hour session to establish the core, and a couple of short evening sessions to test different approaches/ideas and do the mixing
Comments: “To give the sounds more natural feeling and achieve interesting, evolving textures, I used loads of automated pitch shifting and EQing, along with volume/pan envelopes manipulation.”

Dave Pearce – video
Location: Kent, United Kingdom
Links: http://dpsoundesign.com/
Main Tools: Logic Pro X – Logic’s equaliser, compressor and sampler (nothing fancy, simplicity is the way forward for me)
Time involved: An evening
Comments: “I really enjoyed doing the sound for the film; it reminded me that sound can be used to enhance visuals and drastically support the viewer’s immersion. I wish there were more sound design competitions out there – it’s a great way to connect sound designers and appreciate how different people approach things, something that is perhaps lost due to the reclusive nature of sound designers’ work.”

Jeffrey Mengyan – video
Location: Ann Arbor Michigan, USA
Links: https://sites.google.com/site/jeffmengyan
Main Tools: Reaper, Kontakt 5 (Seagulls), Very light EQ and lots of automation for everything else.  That’s about it.
Time involved: 2-3 hours total spread out over a few days
Comments: “The first shot felt unusually still and somewhat lifeless, yet still pleasant. The second felt like the rock was supposed to represent something not-so-friendly, with its extra-dark, centre-frame domination, slight tilt of the horizon, and lighting direction. While going for a pleasant beach sound, it seemed fitting to have some kind of rise at the end to better reflect the sinister look I felt it represented.”

Vincent Fliniaux – video
Location: Montreal (Canada)
Links: http://www.vincentfliniaux.com
Main Tools: Reaper mainly Fabfilter, Pro Q-2 also Valhalla Ubermod, Valhalla Shimmer & Eventide Blackhole
Time involved: Around 3 hours
Comments: “I first started by generating a lot of variations of noise processed with different bandpass filters, then applied an FX chain made of a lot of moving diffused delays + reverb. Everything else was just about selecting the right processed sound, editing, panning and mixing. I learnt a lot doing the work.”