Device Details

Device Overview

Name/Version: Coalescence 1.0
Author: ndivuyo  
Description: Coalescence is a neural concatenative multi-sampler that uses machine learning to organize similar sample slices into clusters based on a chosen spectral feature. Three playback modes take advantage of these clusters in various ways: Point, Rings, and Paths. Additionally, external audio input can be routed in for modulation and to control which audio slices play based on their similarity. All of these, combined with a robust modulation system, make Coalescence one beast of a compact sampler for many different situations!

Teaser:
https://www.youtube.com/watch?v=MvL1mXXch1Y

Walkthrough:
https://www.youtube.com/watch?v=yloQGXstkWk


Features:

- Supports dropping multiple samples, individually or as folders (up to 2000), for concatenative sampling. Each sample has individual parameters for pitch, volume, direction, and transient sensitivity

- A neural network (a self-organizing map, or SOM) that organizes sample slices based on a chosen spectral feature and visualizes them in a 2D circle

Three playback modes:
- Point: choose a single sample slice point to play from; MIDI repitches playback. Closest to a classic sampler. Option for external audio input to control the lookup point based on similarity
- Rings: MIDI pitches trigger circular ranges called Rings; when a Ring is triggered, it plays a random sample slice within its range. Great for drum kits or any kind of sample slicing. There are auto and manual Ring creation options
- Paths: MIDI pitches trigger individual playback paths that can glide or jump through the sample slices. Great for creating sequences or all kinds of movement! Alternatively, there is a Single path mode in which only one path can be triggered and MIDI pitches instead repitch the playback

Various settings for the network including ones for training and previewing the network:
- 4 different spectral features to choose from for clustering the sample slices:
  - Chroma: describes the slice in terms of a 24-step chromatic scale; good for sorting based on tonality
  - Mel and Bark: these two describe the slice with intensities from low to high frequencies, using psychoacoustic scales with perceptually equal steps. Each scales the frequencies differently; good for anything you want sorted from low to high frequencies, such as percussion
  - Speech: uses Mel-Frequency Cepstral Coefficients (MFCCs), which are commonly used for speech recognition; good for sorting vowels, etc.
- Option to feed the network only the transient slices, or every spectral frame
- Cluster radius size and various training parameters
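The clustering described above is the classic self-organizing map idea: feature vectors are pulled toward a grid of nodes, and nearby nodes learn together so similar slices end up in similar regions. A minimal generic sketch of that technique (the grid shape, parameters, and function names here are illustrative assumptions, not Coalescence's internals):

```python
import numpy as np

def train_som(features, grid_size=8, epochs=20, lr0=0.5, radius0=3.0, seed=0):
    """Train a tiny 2D self-organizing map on per-slice feature vectors.

    features: (n_slices, n_dims) array of spectral features.
    Returns a (grid_size, grid_size, n_dims) grid of node weights.
    """
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    weights = rng.random((grid_size, grid_size, dim))
    # Grid coordinates, used to measure neighborhood distance between nodes
    coords = np.stack(np.meshgrid(np.arange(grid_size),
                                  np.arange(grid_size), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        # Learning rate and neighborhood radius both decay over training
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        radius = radius0 * (1.0 - frac) + 0.5
        for x in features:
            # Best-matching unit: the node whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU and its grid neighbors toward x, weighted by distance
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_d ** 2) / (2 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

def assign_slices(features, weights):
    """Map each slice to the index of its best-matching node (its cluster)."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(features[:, None, :] - flat[None, :, :], axis=-1)
    return np.argmin(d, axis=1)
```

After training, slices assigned to the same or neighboring nodes are the "similar" slices the playback modes draw from.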

Various sample playback settings:
- Standard playback settings such as voices, playback direction, one-shot or loop, loop size (in time, beats, or by transient length), pitch, and fade window
- Various parameters for the different playback modes and sample slice lookup
- A phase vocoder playback mode for time stretching. It also has a spectral attack and release for cheap blur effects or fading between spectra

External audio input routing with various uses:
- Option to have the sampler voices triggered by the input transient detector instead of by MIDI notes, so you can trigger the sampler's voices with an external input!
- As mentioned, the input can optionally control which sample slice plays based on its similarity to the input at any moment. This opens the door to pseudo style transfer and other effects (note: the results are not the cleanest or most robust, but they are great for experimentation and can work well when dialed in and handled correctly). Some example uses: beatboxing with your voice to trigger drum sounds, having one sound 'mask' another for a style-transfer-esque effect (using the phase vocoder playback mode), creating a voice-controlled synth, etc.
- An envelope follower and pitch detector which can be used for modulation
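The similarity-driven lookup described above boils down to nearest-neighbor matching: compare the live input's current feature frame against the stored slice features and play whichever slice is closest. A minimal sketch of that idea, assuming generic feature vectors and cosine similarity (the device's actual feature extraction and distance measure are internal to Coalescence):

```python
import numpy as np

def closest_slice(input_frame, slice_features):
    """Return the index of the stored slice most similar to the input frame.

    input_frame: (n_dims,) feature vector of the live audio right now.
    slice_features: (n_slices, n_dims) features of every stored slice.
    Cosine similarity is used so overall level differences matter less.
    """
    eps = 1e-12
    a = input_frame / (np.linalg.norm(input_frame) + eps)
    b = slice_features / (np.linalg.norm(slice_features, axis=1, keepdims=True) + eps)
    return int(np.argmax(b @ a))
```

Run per analysis frame, this is what lets a beatboxed voice pull up the drum slice it most resembles at each moment.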

A modulation system, each modulation source has two mappable destinations:
- Two LFOs with Perlin noise options
- Two envelopes
- Two random spray values created at the beginning of each voice
- The routed external audio input's envelope follower and pitch detector
- Standard MIDI sources: velocity, key pitch, aftertouch, pitch bend, mod wheel
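A Perlin-noise LFO shape like the one mentioned above can be sketched as 1D gradient noise: random slopes at integer lattice points, smoothly eased in between, giving a wandering but continuous modulation signal. This is a generic illustration of the technique, not the device's implementation:

```python
import numpy as np

def perlin_lfo(t, seed=0):
    """1D Perlin-style gradient noise: smooth, and repeatable for a given seed.

    t: array of phase values (e.g. time * rate). Output is roughly in [-1, 1].
    """
    t = np.asarray(t, dtype=float)
    rng = np.random.default_rng(seed)
    # A random gradient (slope) at each integer lattice point
    grads = rng.uniform(-1.0, 1.0, int(np.max(t)) + 2)
    i = np.floor(t).astype(int)
    f = t - i                                    # fractional position in the cell
    fade = f * f * f * (f * (f * 6 - 15) + 10)   # smootherstep easing curve
    g0 = grads[i] * f                            # contribution from left lattice point
    g1 = grads[i + 1] * (f - 1.0)                # contribution from right lattice point
    return g0 + fade * (g1 - g0)
```

Because the value passes through zero at every lattice point and the easing curve has zero slope at the cell edges, the output never jumps, which is what makes it useful as an LFO shape.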

Per voice filter with a few modes:
- Standard simple biquad filter with standard shapes
- Ladder filter mode
- Vowel or Formant filter mode with formant options, frequency shifting, spreading, and bandwidth sloping
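The "standard simple biquad" mode above refers to the common two-pole, two-zero filter section. A minimal lowpass sketch using the widely referenced Audio EQ Cookbook coefficient formulas (a generic illustration of a biquad, not the device's own code):

```python
import math

def biquad_lowpass_coeffs(fc, fs, q=0.7071):
    """Audio EQ Cookbook lowpass coefficients, normalized so a0 == 1."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    b0 = (1.0 - cosw) / 2.0
    b1 = 1.0 - cosw
    b2 = b0
    a0 = 1.0 + alpha
    a1 = -2.0 * cosw
    a2 = 1.0 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad_process(samples, coeffs):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Running one such section per voice, with coefficients recomputed as the cutoff is modulated, is the usual shape of a per-voice filter like this.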

NOTE OF TRANSPARENCY AND LIMITATIONS:
- I do not claim this device to be the most robust neural sample library organizer, so if you are thinking of trying to overload the device by dropping in many hundreds of mid-sized samples or very long samples simultaneously, you will certainly hit limits. That said, it still analyzes and trains on samples fast; details below
- Dropping in, say, a hundred or two one-shot drum samples will be analyzed and trained on by the network quickly, but hundreds of loops over 10 seconds in length each may take some time depending on the details
- Dropping in individual samples up to about thirty seconds is fast (short samples analyze instantly, and samples up to thirty seconds take about 1-2 seconds depending on sample rate). Beyond that you may wait a few seconds, or much longer depending on how much longer the sample is. If you drop in a thirty-minute sample you will be waiting a while (sample rate matters too)
- An important note: although the device can handle up to 2000 samples, the neural network only has 2500 slots for holding sample slices, so unless the samples are one-shot percussion samples you will fill up the network loooooong before you reach 2000 samples. This device is therefore meant to work with a handful of samples, or many one-shot samples, rather than an entire library
- Similarly, the neural network training time will vary a lot depending on how much sample data there is, and whether you are sending in only transients or every spectral frame. Sending in transients only is generally very fast, but it depends on how much sample content there is
- The good news is that the neural network is saved in the device instance or preset, so if you do train on a large amount of sample data, you won't have to again. Similarly, for the sample analysis there is an option to save the analysis files, so if you want a preset or long sample to load instantly you can have the analysis file saved


dillonbastan.com
dillonbastanstore@gmail.com
 

Device Details

Tags: sampler, other
Live Version Used: 10.1.18
Max Version Used: 8.1.5
Date Added: Dec 08 2023 09:14:37
Date Last Updated: Not updated yet
Device Type: instrument_device
Download URL: https://dillonbastan.com/store/maxforlive/index.php?product=coalescence
License: Commercial


Comments

WILDLY INNOVATIVE! I was like "This sounds like something Dillon Bastan would do..." then I scrolled down to the bottom... AHA! (Didn't recognize your username here at first.) Whelp, color me curious and compelled by the attached animated gif, this trailer is all kinds of universe-glimpsing awesome and I'm off to learn more!

Can it embed the samples with the Ableton Project too? To be self-contained.

Thanks! I believe so... basically, if you store the samples in the User Library it will remember that relative path, so it just has to match that relative path. And that only works if the file is installed (placed) in the correct relative path in the User Library (at least for the presets to load correctly; as for the sample loading to work, I think it only has to be placed somewhere in the User Library, but I could be wrong).

Let me know if you have issues dillonbastan@gmail


whoa!:) bigups Dillon!
way more fun than SkatArt!:)
( and way cheaper )

Looks amazing!

