Example : working with sample based content
This page walks through a practical example of sequencing sample-based content using nothing but the MWEngine core library. You can easily extend the core classes to create your own glitchy triggers, or to add an element of randomness (e.g. randomly choosing between different samples of the same type, such as different snares, to avoid a static sounding retrigger).
Recommended reading concerning MWEngine classes : SampleEvent
All samples are to be loaded into the SampleManager and given a unique key. For instance : if you're creating a drum machine with alternate kick and snare patterns, you only need one sample of each available inside the SampleManager. The AudioEvents that reference these samples simply point to the respective AudioBuffers inside the SampleManager and thus consume no extra memory for each individual "drum hit".
You simply register a sample like so:
SampleManager::setSample( std::string uniqueID, AudioBuffer* audioBuffer, unsigned int sampleRateOfSample );

Where uniqueID is the unique identifier used to retrieve the sample from the SampleManager, audioBuffer is the AudioBuffer containing the sample data and sampleRateOfSample describes the original sample rate of the sample in Hz. The SampleManager will now map the given AudioBuffer to the string identifier, which you can retrieve by simply invoking :
AudioBuffer* sampleBuffer = SampleManager::getSample( "uniqueID" );

NOTE : If you're using the engine solely from Java without writing additional C++ code, you might want to look at the JavaUtilities to see how you can easily create AudioBuffers inside the SampleManager.
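From native code, the complete register-and-retrieve flow looks something like the sketch below. This is a minimal, hypothetical example : the buffer is left silent where your actual sample data would go, and the header paths are assumptions that may need adjusting to your project layout.

#include <audiobuffer.h>
#include <utilities/samplemanager.h>

void registerSnareSample()
{
    unsigned int sampleRate = 44100;      // sample rate of the source audio
    int channels            = 2;
    int bufferSize          = sampleRate; // one second worth of samples

    AudioBuffer* buffer = new AudioBuffer( channels, bufferSize );

    // ...write your sample data into buffer->getBufferForChannel( c ) here...

    SampleManager::setSample( "snare", buffer, sampleRate );

    // the buffer can now be retrieved anywhere by its identifier
    AudioBuffer* snare = SampleManager::getSample( "snare" );
}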
In the not wholly unlikely event that you wish to use audio files as the source of sampled content, you can consider two approaches :
one is to load raw audio resources from the /res-folder of your application, meaning you must use JNI to transfer their content to the native layer. The other is to load them from the device storage (internal / SD / other applications, etc.) directly via the filesystem.
MWEngine does NOT supply classes to directly open any kind of audio format. There are many open source libraries available that will open WAV, AIFF, FLAC, etc. and it's up to you to integrate one of these alongside the MWEngine library. What you need to do is have such a library open the file of choice, extract the uncompressed, raw audio data and store it inside an AudioBuffer instance, remembering that an AudioBuffer represents uncompressed multi-channel audio without a file header (it is basically a headerless WAV).
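As a rough illustration of that last step, the sketch below assumes a hypothetical decoder function decodeToFloat() (standing in for whichever decoding library you integrate) that yields interleaved float samples; the result is de-interleaved into an AudioBuffer and registered with the SampleManager. SAMPLE_TYPE is MWEngine's sample data type (defined in global.h); the header paths are again assumptions.

#include <string>
#include <vector>
#include <global.h>
#include <audiobuffer.h>
#include <utilities/samplemanager.h>

// hypothetical stand-in for your decoding library of choice : it should fill
// "interleaved" with raw float samples and report the amount of channels and
// the sample rate of the decoded file
extern bool decodeToFloat( const char* path, std::vector<float>& interleaved,
                           int& channels, unsigned int& sampleRate );

bool loadSampleFromFile( const char* path, const std::string& id )
{
    std::vector<float> interleaved;
    int channels;
    unsigned int sampleRate;

    if ( !decodeToFloat( path, interleaved, channels, sampleRate ))
        return false;

    int frames = ( int ) ( interleaved.size() / channels );
    AudioBuffer* buffer = new AudioBuffer( channels, frames );

    // de-interleave the decoded data into the AudioBuffer's channel buffers
    for ( int c = 0; c < channels; ++c ) {
        SAMPLE_TYPE* target = buffer->getBufferForChannel( c );
        for ( int i = 0; i < frames; ++i )
            target[ i ] = ( SAMPLE_TYPE ) interleaved[ i * channels + c ];
    }

    SampleManager::setSample( id, buffer, sampleRate );
    return true;
}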
SampleEvent is the base class for sample-based AudioEvents and differs from its parent BaseAudioEvent in that it supplies a method to set an external source as the event's AudioBuffer, along with methods that allow range-based playback (for instance to only play part of the buffer in a loop, for auditioning, etc.).
However, before we can create a new SampleEvent instance, we first need an instrument it should reference.
So how do we (quickly) get sequenced sound going ?
In order to play sequenced samples, you will need a SampledInstrument, which is basically a class that will hold SampleEvents in an event queue which can be read by the Sequencer. Additionally the instrument holds a unique AudioChannel (meaning the instrument has its own "mixer track" and ProcessingChain). We create one like so:
SampledInstrument* instrument = new SampledInstrument();

Now that we have an instrument (which has automatically registered itself with the sequencer), let's get back to our exercise by keeping the drum machine example in mind; let's say we'd like to create a SampleEvent that should sound the snare sample. Let's start by constructing the SampleEvent instance:
SampleEvent* snareEvent = new SampleEvent( instrument );

Now we retrieve the snare sample from the SampleManager by referencing its unique identifier and set it as the source for the SampleEvent, like so:
snareEvent->setSample( SampleManager::getSample( "snare" ), SampleManager::getSampleRateForSample( "snare" ));

Finally, we make it eligible for playback by adding this new event into the sequencer:
snareEvent->addToSequencer();

As we can play back multiple samples in the same instrument (and thus share its AudioChannel and ProcessingChain) we'll add another SampleEvent to play a kick sound:
SampleEvent* kickEvent = new SampleEvent( instrument );
kickEvent->setSample( SampleManager::getSample( "kick" ), SampleManager::getSampleRateForSample( "kick" ));
kickEvent->addToSequencer();

If you now start the sequencer (see the example Activity in the GitHub repository), you should hear both the kick and the snare on the first beat of the bar. (Note that by setting "setLoopable" of a SampleEvent to true, it will keep repeating once it has finished playing its buffer, essentially creating a "loop" that repeats regardless of sequencer position, ideal for drones, etc.)
To get something more musical going, let's enhance this example by timing the events :
By default the engine sets the Sequencer tempo to 120 bpm and the amount of measures/bars to 1. If your device's preferred sample rate is 44100 Hz, this means a single measure is 88200 samples long (a measure at 120 bpm lasts 2 seconds). Let's create a terribly interesting "rhythm" by having the snare play on the second beat of the bar, instead of simultaneously with the kick on the first. Simply set the event position like this:
snareEvent->setEventStart( 22050 );
snareEvent->setEventEnd( snareEvent->getEventStart() + snareEvent->getSampleLength() );

22050 samples is the offset of the second beat in a 120 bpm bar at 44.1 kHz. We update the event's end position (after which the sample is "cut off" and won't continue playing) by adding the sample length to the new start offset. That's it.
Note that you can change AudioEvent offsets during playback, as the Sequencer is responsible for feeding the events appropriate to the current playback position to the engine.
For the above calculations you can use the BufferUtility, which translates time and musical concepts (such as beats and measures at a given tempo) into buffer sample amounts.
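The underlying arithmetic is also simple enough to do by hand. The sketch below is a minimal example, assuming a 4/4 bar and that the SampledInstrument* instrument created earlier is still in scope : one beat spans ( 60 / tempo ) * sampleRate samples, so at 120 bpm and 44.1 kHz that is 22050 samples, which we use here to place additional kick events on beats two, three and four (turning the pattern into a four-to-the-floor beat).

// a minimal sketch : compute the beat offset by hand (4/4 bar assumed) and
// place additional kick events on beats two, three and four
int sampleRate     = 44100;                                    // the device's sample rate
float tempo        = 120.f;                                    // the Sequencer tempo
int samplesPerBeat = ( int ) (( 60.f / tempo ) * sampleRate ); // 22050 samples per beat

// beat indices are zero-based : the first beat sits at offset 0
for ( int beat = 1; beat < 4; ++beat )
{
    SampleEvent* kick = new SampleEvent( instrument );
    kick->setSample( SampleManager::getSample( "kick" ), SampleManager::getSampleRateForSample( "kick" ));
    kick->setEventStart( beat * samplesPerBeat );
    kick->setEventEnd( kick->getEventStart() + kick->getSampleLength() );
    kick->addToSequencer();
}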
Please continue to : creating a simple drum machine.