Someone on the VI Forum/SFX asked a question about sound library formatting, specifically about the choice between [1 take per file] and [multiple takes per file]:
“One oddity I’ve run into is that sometimes, a wav file in a pack may offer multiple variations of a sound/oneshot/effect within the same file with maybe a second of spacing. Is this usual? The only reason to make use of such a file is to choose a starting point programmatically in software, but I don’t see why you wouldn’t just cut up your variations into multiple files.”
As I had to think through all angles of this question ten years ago when deciding how to deliver the first HISSandaROAR sound library, I wrote a stream-of-consciousness reply and figured I’d post it here, as it may be useful to others…
The short answer is: who is your target user and how do they prefer it?
You mention ‘one-shots’, which is a music term and not a sound FX/sound design term, so maybe you are talking about music samples and not SFX? I am referring to SFX, since music samples are usually used either via VIs (virtual instruments, where individual sounds are not even accessible) or by auditioning and loading single sounds or presets into a sampler or plugin, which is a totally different use case from SFX.
The longer answer: in my experience as both a user and a library developer, the reason for not delivering SFX libraries as [1 take per file] comes down to a couple of different and important things.
First, and very important: that approach does not scale. Second: the typical workflow of how SFX are used. So you need to be very clear on the use case. For example, if I search my music sample library in SoundMiner, all the ‘808 kick’ results are single takes per file, because that is how they are used by a musician. But if I search my SFX library for ‘punch’, none of the punches are delivered as a single take per file. (A “one shot” is a music term, and I would expect it to be one take per file.)
Why does a separate take per file not scale for sound effects?
A simple example: my personal SFX and AMB library has over 500k sounds in it. If those sounds were broken out into separate files for every take, my library would not be 500k sounds, it would be more like 500 million, and when I searched for ‘METAL IMPACT’ in SoundMiner I would get 100,000 hits. Auditioning my way through all of those is simply not viable – imagine it! This problem won’t be apparent while you work on your own library, but as soon as your library is added to a user’s personal library containing hundreds of thousands of other sound files, it will become very, very apparent. It’s a similar reason why file names and metadata are so important: on its own, a single library is no problem, but add it to a larger collection containing thousands of other libraries, and if your sounds can’t be efficiently found and identified, they will not be used. But again, I mean SFX, not music samples.
The workflow of most professional sound editors and sound designers involves a sound library app (SoundMiner, Basehead, AudioFinder) which makes it very easy to transfer part of a file. So for example, if you audition a file of 20 punches and only want take 3, simply select take 3, transfer & done! In SoundMiner’s case, the silence between takes can also be used to auto-split and load discrete takes into the Radium sampler, and the same applies to many other samplers.
When working in a “linear sound FX editor” fashion, it is also efficient to import a single composite file of the 20 punches that you like and want to use: as soon as you have used the first punch, you can access variations instantly, avoiding any repeated sounds. Rather than going back and importing another very short sound file, you can simply stay in your edit session and move to the next take within the composite file.
But this approach only applies to variations/takes of the same sound. For AMB libraries, every take is usually a different location or time (e.g. AMB city skyline 1, AMB city skyline 2), and those are better as separate files because they are not ‘take variations’; they are entirely different locations.
MAKING BOTH OPTIONS EASY:
Some people (especially in game audio) may prefer one file per sound, especially when implementing them. With a composite file (X takes in a single file, separated by silence), anyone who does prefer 1 take per file can very, very easily split it and output it that way, thanks to the silence between takes.
For example, in Pro Tools: use Strip Silence, export, done. Every DAW has such options. But if the reverse is delivered, one take per file, they would have to import 20 separate files, space them a second apart, combine them into a composite file and export it as a single file, likely losing all the metadata along the way.
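If you wanted to script this outside a DAW, here is a minimal sketch of both directions using Python and the pydub library. It is an illustration under stated assumptions, not a definitive tool: the 1 second gap, the -50 dBFS silence threshold and the file naming are all placeholders you would tune per library, and note that a generic split like this will not carry across any embedded metadata.

```python
# Minimal sketch (assumes pydub is installed): split a composite SFX file
# into one-take files using the silence between takes, and the reverse,
# join one-take files back into a composite with gaps of silence.
from pathlib import Path
from pydub import AudioSegment
from pydub.silence import split_on_silence

def split_composite(composite_wav, out_dir, gap_ms=1000, thresh_dbfs=-50):
    """One file per take, split on the silent gaps between takes."""
    audio = AudioSegment.from_wav(composite_wav)
    takes = split_on_silence(audio,
                             min_silence_len=gap_ms,      # assumed ~1 s gaps
                             silence_thresh=thresh_dbfs,  # placeholder threshold
                             keep_silence=100)            # keep 100 ms head/tail
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stem = Path(composite_wav).stem
    for i, take in enumerate(takes, start=1):
        take.export(out / f"{stem}_take{i:02d}.wav", format="wav")

def join_takes(wav_paths, composite_wav, gap_ms=1000):
    """The reverse: concatenate one-take files with silence between them."""
    clips = [AudioSegment.from_wav(p) for p in wav_paths]
    gap = AudioSegment.silent(duration=gap_ms, frame_rate=clips[0].frame_rate)
    composite = clips[0]
    for clip in clips[1:]:
        composite += gap + clip
    composite.export(composite_wav, format="wav")
```

Either direction is only a few lines, but notice the asymmetry: the split is driven entirely by the silence already baked into the composite file, while the join has to reconstruct the spacing from scratch and would still need the metadata re-embedded afterwards.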
Considering the various likely use cases, and thinking about how you as a user prefer to work, is what should inform your decision. While some people might think there isn’t much difference between a music sample library and a sound FX library, some very important differences come down to the very question you ask. Also, the use of metadata (absolutely crucial for sound FX/design use), along with consistent file naming, bit depths, sample rates etc., differs vastly between the two use cases…