The Official Sequential/DSI Forum

Voice Component Modeling with the Prophet Rev2
« on: March 13, 2019, 09:32:25 AM »
I found a way to better capture the character of classic VCO analog poly synths with the Prophet Rev2.   It's not a magic bullet, and it can't make a Curtis filter sound like a Moog ladder, but it can produce some really organic, warm, lush patches that sound a bit more authentic.

This method can also be used to model real world analog/acoustic instruments more realistically  (woodwinds, brass, strings, etc).   

Voice Component Modeling:

http://www.VoiceComponentModeling.com/vcm.aspx

It's focused on virtual voice-by-voice modeling down to the component level, and brings a high level of definition to polyphonic sounds.

Much of the character of a given instrument or ensemble comes from small imperfections on a voice-by-voice basis.   This method allows you to virtually allocate a voice count (e.g., an 8-voice CS-80, a 6-voice Memorymoog, etc.), and dial in specific characteristics on a per-voice basis.

If you have poly-voice patches using a decent amount of Osc Slop, you can probably use this method to improve their character.   Osc Slop is a tool that can give your Rev2 patches some voice definition, but it does so randomly (multiple free-running LFOs), with a lot of exaggerated tuning motion per oscillator, especially once you get into medium to higher values.

I've found that in patches where I used Osc Slop for voice definition before, I now cut the Osc Slop down by 80% or so and use this voice-modeling method to produce the majority of the voice-by-voice character... then I may sprinkle just a bit of Osc Slop back in, for a tiny bit of randomness and tuning motion.

In addition to oscillator tuning, you can target any other mod destination or macro with per-voice offsets... allowing you to create poly-voice instruments full of character (minor imperfections from voice-to-voice).
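For illustration only (this is not Rev2 code, and the `voice_table` values are made-up example offsets in cents), the contrast between Osc Slop's randomness and fixed per-voice offsets can be sketched in Python:

```python
import random

def slop_offset(depth_cents, rng):
    """Osc Slop style: a fresh random detune for every note-on."""
    return rng.uniform(-depth_cents, depth_cents)

def vcm_offset(voice_table, keypress_index):
    """VCM style: a fixed offset per virtual voice, cycling deterministically."""
    return voice_table[keypress_index % len(voice_table)]

rng = random.Random(42)
voice_table = [0.4, -1.1, 2.3, -0.7, 1.6, 0.0]  # hypothetical per-voice cents

# Slop gives a different detune on every note-on...
a, b = slop_offset(2.0, rng), slop_offset(2.0, rng)
# ...while the VCM table repeats exactly every 6 keypresses.
assert vcm_offset(voice_table, 0) == vcm_offset(voice_table, 6)
```

The point of the sketch is just the second property: the same virtual voice always comes back with the same imperfections, which is what reads as "organic" rather than "wobbly".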

Check out the web page I set up for a bunch more details and a full write up.   I'm working on a bank of sounds now.   In the meantime, if you want some sample patches / VCM templates, send me your email and I'll send you a mini-bank with some examples, and templates for 4-voice, 5-voice, 6-voice and 8-voice patches with all the modulation wiring set up.

http://www.VoiceComponentModeling.com/vcm.aspx

Part 3 addresses the Prophet Rev2 implementation and Part 4 gives an overview of the benefits.


Re: Voice Component Modeling with the Prophet Rev2
« Reply #1 on: March 13, 2019, 02:36:28 PM »
Great job on your website!  I can see you put a lot of research into this.  This is one of those discoveries that you look back on and think "of course, it makes so much sense in hindsight".

I've never worked with the gated sequencer with the Rev2, guess I'll have to look into it now.  I just requested access to your templates via email.

It's interesting to note that you scoured posts on the characteristics of various classics -- the voice allocation of Jupiter 8s, the voicecard tuning on Memorymoogs, etc.  I'm particularly curious to find out what kinds of envelopes old classics used.  Did the SEM modules have logarithmic curves?  Were the Rolands more linear?  Were the attack and decay stages different from each other?  Did envelope amount affect those curves?

Do you have any links that might address those issues?  It's a theme that was brought up often in the Moog One thread over at GS.  Thanks.
Moog One <> Prophet Rev2 16Voice <>  Kronos 61 <> Andromeda <> Integra 7 <> Behringer Model D <> Minitaur <> Slim Phatty <> Matrix 1000 <>  Micron <> Privia PX-5S <> Beat Buddy <> Perform VK/VE <> FCB1010

Re: Voice Component Modeling with the Prophet Rev2
« Reply #2 on: March 13, 2019, 03:00:21 PM »

Great job and an interesting read.

You mentioned the current lack of ability to mimic voice-stealing behaviour. Could that not be done by defeating voices on the synth? (I know the P6/OB-6 have that feature and presume the Rev2 would too.) That way, if modelling a P5, you could simply defeat the sixth and higher voices. You'd have to power-cycle the synth to recover the voices, so it's not an ideal solution, but it could be a workaround.

Re: Voice Component Modeling with the Prophet Rev2
« Reply #3 on: March 13, 2019, 04:24:34 PM »
Extremely well documented write up I am seriously impressed. I sent you an email.

maxter

Re: Voice Component Modeling with the Prophet Rev2
« Reply #4 on: March 13, 2019, 05:30:56 PM »
"of course, it makes so much sense in hindsight".

My exact feeling. Having used the gated sequencers quite a bit, mostly for complex melodic and rhythmic variations, I feel like I should've realized this much sooner, especially after creativespiral's other, initial thread on this subject. How did I not think of using the Key Step gate mode this way?

I will probably run with this in the same direction as when using the gated sequencers in a melodic/rhythmic context: combining sequencers of various lengths to build patches that don't repeat very often. I generally like going with 12, 13, 14, 15 steps, or 11, 12, 13, 14 (avoiding 16, especially with rhythmic patterns, since it's 4 squared, which in turn is 2 squared, the most fundamental and repetitive rhythm). The goal is taking a long time until the behavior repeats, while avoiding 2/4/8/16.

Having 4 sequencers of different lengths (loop points) modulating different and/or the same parameters will create results in a sense "beyond" voice-allocation VCO mimicking (though VCO mimicking is more than enough). Imo DCOs should reign supreme in this respect from here on, if there are enough of the right modulation possibilities. Like having 15 VCOs routed through 14 filters, each repeating cyclically, for instance. Or, for example, seq1 routed + to osc1 and - to osc2, and vice versa with seq2 at a different length (via the LFOs in this case, at least for now  ;) ).

I really hope Sequential will implement a "fine tune" mod parameter eventually, to save on mod slots. Using 4 gated sequencers with at least 12 possible modulation destinations should be enough for some good stuff then. What I mean is this: creativespiral has so excellently broken VCO behavior down to a couple of parameters that can be digitally controlled, mainly by LFOs; the next step would be to take the concept further. I.e., we could mimic the VCO character but make it even less predictable/stable than a VCO (though it WOULD actually be predictable, in the true sense), since there are more variations per tone going on simultaneously than on a VCO, and each amount/depth is CONTROLLABLE (unlike on VCOs). I.e., take the studied characteristics of VCO behavior and expand on and/or modulate them further.
Since, I believe, creativespiral has nailed the behavior and how to recreate it, it is easily multiplied and/or divided and modulated, to different destinations.
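To put rough numbers on how rarely those length combinations repeat: the combined pattern of several looping sequencers realigns at the least common multiple of their lengths. A quick Python check (illustration only, requires Python 3.9+ for `math.lcm`):

```python
from math import lcm

# Keypresses before the combined pattern of several gated sequencers
# repeats exactly, for the length choices mentioned above.
print(lcm(12, 13, 14, 15))   # 5460 keypresses before an exact repeat
print(lcm(11, 12, 13, 14))   # 12012
print(lcm(4, 8, 16))         # 16 -- power-of-two lengths repeat quickly
```

This is why avoiding 2/4/8/16 pays off: co-prime-ish lengths push the repeat period into the thousands of keypresses.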

I will probably make a template patch for this, so that I've got the 4 sequencers already set up, ready to go, just routing them to different destinations for each patch. And probably some of the mod matrix set up as well.

Also, the BPM knob may find a whole new purpose for me now when used in this context! Not to forget that switching between gate modes can sometimes be useful too, especially when MIDI-synced to an external sequencer.

Loads of fun to come! Huge thank you for your solid and thorough work, creativespiral! You ought to be hired by [insert synth brand here] to design the one DCO synth to rule them all. I know I'd buy it at least!

Re: Voice Component Modeling with the Prophet Rev2
« Reply #5 on: March 13, 2019, 10:32:29 PM »

Thanks!   I haven't compiled details on the envelopes of old synths... most of my research was focused on oscillator tuning tendencies and voice allocation.   This Wikipedia page does have a good list of Curtis (CEM) and Solid State Micro (SSM) integrated circuits, with a decent amount of detail on which synths used which chips... so it's a good starting point for looking up further details.   For instance, the Prophet-5 Rev1/2 used SSM2050 envelopes, but the Rev3 used CEM3310s.   The next step would be to look up those chip numbers and try to find specific measurements on timing/curves/behavior.

https://en.wikipedia.org/wiki/CEM_and_SSM_chips_in_synthesizers

I know you mentioned an interest in the details of Recursive Envelope Modulation on the Rev2... that's something I've done some testing on and plan to add to the Appendix F thread at some point... I have a few topics that are partially documented... I just need to spend some additional time writing up the specifics, taking screen grabs, etc.   Between work and family, it has been hard for me to block out much music and sound-design time lately.





Re: Voice Component Modeling with the Prophet Rev2
« Reply #6 on: March 13, 2019, 10:44:56 PM »

Thanks Quatschmacher.   I don't think the Rev2 has a way to defeat voices... That may be a VCO only "feature", in case one of your voices is dead or can't be tuned within a reasonable threshold.   

We get sort of the best of both worlds with virtual voice modeling... we can define whatever voice count we want and won't experience voice stealing (unless the physical polyphony on board is exceeded)... if the virtual voice count is exceeded, the new voice will just copy the value of the oldest/next voice in line.   Of course, if we truly wanted that extra realism of voice stealing, it could possibly be added with future manufacturer developments... though I'd like to see "Key Step, Reset" and "Key Step, Backtrack" implemented before that... as mentioned in the last section of the website I set up, that would allow modeling the Jupiter-8 voice allocation scheme, and would be better for acoustic ensemble modeling.

Re: Voice Component Modeling with the Prophet Rev2
« Reply #7 on: March 13, 2019, 10:50:19 PM »

Thanks Maxter! Excited to see and hear examples of what people make.   As for templates, I have some .syx files I can send you... or if you want to build them yourself, it's not that hard... you just have to keep the scaling/offsets in mind.

Re: Voice Component Modeling with the Prophet Rev2
« Reply #8 on: March 13, 2019, 11:04:29 PM »
The website has details for the Rev2 setup, but here's a step-by-step list to refer to if wiring this up yourself for an Init Patch:

1. Set Osc1 and Osc2 fine tuning to -31. 
This offset provides a center point to work from in the Gated Seq setup.  Also set Osc Mix to the middle.

2. Setup the Gated Sequencer

a. Set Gated Seq to "Key Step" Mode

b. Gated Seq Lane 1: Will be routed/scaled to Osc 1 Freq via Mod Matrix
Set a reset at step seven (for a six-voice emulation), and then dial in some random values between ~54-70 for the six steps (with 62 being the center point)... Don't set a Gated Seq destination here... leave the destination set to "none".

c. Gated Seq Lane 2: Will be routed/scaled to Osc 2 Freq via Mod Matrix
Set a reset at the same step as sequence 1, and then dial in similar values here (~54-70)... for per-voice fine tuning.  Again, don't set a destination here; just set the per-voice values and the reset on step seven.

Note:  Each increment will equal ~0.38 cents of fine-tuning change when scaled (ie: +3 will equal a little over 1 cent sharp, and -3 a little under 1 cent flat).   We're using 62 as the center point for this setup...  Above, we set Osc1 and Osc2 fine tuning to -31 to compensate for the center point.   A value of 62 means "perfectly in tune".
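To make the arithmetic concrete, here's a tiny Python sketch; the 0.38 cents-per-increment figure is the approximate value quoted above, not a measured constant:

```python
CENTS_PER_INCREMENT = 0.38  # approximate figure from the note above

def detune_cents(step_value, center=62):
    """Fine-tune offset in cents for a gated-sequencer step value."""
    return (step_value - center) * CENTS_PER_INCREMENT

print(round(detune_cents(65), 2))  # +3 increments: a little over 1 cent sharp
print(round(detune_cents(59), 2))  # -3 increments: a little over 1 cent flat
```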

3. Setup the Mod Matrix for Fine Tune Scaling of Freq:

a. Mod Matrix 1: 
Set source to Seq 1, Set destination to Osc Freq 1, Set amount to 1.

b. Mod Matrix 2: 
Set source to Seq 2, Set destination to Osc Freq 2, Set amount to 1.

Note:   The value that is passed on to the Osc Frequency will be a fraction of 1: the number set in the Gated Sequencer (62, for example), divided by the max Gated Sequencer value (125), multiplied by the Mod Matrix amount (1).   So if the value of the Gated Sequencer step is 62, we're sending on 62/125*1 = 0.496.   In step one, we set offsets of -31 fine tuning for each oscillator, so the net effect of a value of 62 cancels out the offset and we should be perfectly tuned.
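That scaling can be written out directly; a quick Python check of the arithmetic above (illustrative only, nothing Rev2-specific):

```python
SEQ_MAX = 125  # maximum gated-sequencer step value

def osc_freq_mod(step_value, matrix_amount=1):
    """Fraction of full modulation passed on to Osc Freq, per the note above."""
    return step_value / SEQ_MAX * matrix_amount

print(osc_freq_mod(62))  # 0.496, cancelling the -31 fine-tune offset
```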

Alright, now test things out!
If it's wired up correctly, then every key press should advance the Gated Sequencer by one Key Step, giving you a sort of virtual six-voice instrument.   Each step of the sequence will have unique values for Osc1 and Osc2 fine tuning, giving you a slightly different detuned character per voice.  It gives a more organic feeling, since each voice (and each osc) has unique tuning imperfections.

Try holding down a chord, and you'll notice the natural motion/phasing associated with each oscillator having slightly different tuning.  If you tried to achieve the same sort of per-voice character using Osc Slop, you would have a bunch of randomness, and exaggerated/artificial tuning motion added onto the more natural wave motion of oscillators that have more stable tuning offsets...

If it's too wild sounding, try dialing back the Gated Seq values closer to 62 per step.  If you want more character on a per-voice basis, scale the values out further away from 62.  Remember, every 3 increments away from 62 will equal about 1 cent... so if you want lots of character, you can push tuning per osc pretty far.  If you push it too far it may sound like crap, but if you spend some time dialing in and testing values, you can get into some untamed Memory Moog type territory. 

Note:   For each step, you may want to keep the values of the #1 and #2 sequence steps somewhat close to each other, while still having slight variations.   This emulates a situation where the voice as a whole has a sharp or flat tendency, yet the oscillators still have slight offsets.   For instance, for Gated Seq 1 and 2, you might want values semi-grouped like this:
Seq 1: 61, 69, 73, 54, 71, 63
Seq 2: 64, 62, 76, 57, 67, 62
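Using the example values above, a small Python sketch (again treating 0.38 cents/increment as approximate) prints each virtual voice's oscillator offsets:

```python
seq1 = [61, 69, 73, 54, 71, 63]  # Osc 1 step values from the example above
seq2 = [64, 62, 76, 57, 67, 62]  # Osc 2 step values

def cents(step, center=62, cents_per_increment=0.38):
    # 0.38 cents/increment is the approximate figure from the earlier note
    return (step - center) * cents_per_increment

for voice, (s1, s2) in enumerate(zip(seq1, seq2), start=1):
    print(f"voice {voice}: osc1 {cents(s1):+6.2f}c   osc2 {cents(s2):+6.2f}c")
```

Notice how each voice's two oscillators stay loosely grouped (sharing a sharp or flat tendency) while still differing slightly, which is the character being modeled.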

If you're willing to sacrifice two more mod slots, you can try an alternate per-voice detuning method: modeling intonation offsets (which are very common in VCOs).  In this case, route Mod Matrix 1's destination to another Mod Matrix slot's amount with a value of 1 (or -1) (ie: Dest: Matrix 3 Amount).   Then for Matrix 3, set the source to Note Num, the destination to Osc Freq 1, and the amount to 1.   Now you have a per-voice intonation offset for Osc Frequency.  You'll want to scale all the Gated Sequencer values way down (maybe between 0-12 per step)... and you may need to adjust the main Osc fine tuning to compensate for the intonation tuning...  ie: dial it in so that around the middle of the keyboard things are close to perfect tuning, but as you get into the low or high registers, voices with higher values will have a sharp or flat tendency (depending on whether you chose a Mod Matrix 1 value of 1 or -1).
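As a rough illustration of that chained routing (not Rev2 code; the units and the `intonation_mod` helper are invented for the sketch):

```python
SEQ_MAX = 125  # maximum gated-sequencer step value

def intonation_mod(seq_step, note_num, matrix1_amount=1):
    """Chained-routing sketch: the per-voice gated-seq value scales a
    Note Num -> Osc Freq modulation, so detune grows toward the key
    extremes. Output is in arbitrary 'mod units', not cents."""
    matrix3_amount = seq_step / SEQ_MAX * matrix1_amount
    return matrix3_amount * note_num

print(intonation_mod(0, 100))   # 0.0 -- a voice with step value 0 tracks perfectly
print(intonation_mod(12, 100))  # a voice with step value 12 drifts with key position
```

The design point: because the sequencer value multiplies a keyboard-tracking modulation rather than adding a constant, each virtual voice gets its own intonation slope instead of a flat detune.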

There are tons of other options to try for virtual voice setups, with per-voice behavior for a variety of destinations in the Osc section; you can also target VCF, VCA, and envelope behavior, and by using Mod Matrix scaling, you can get finely tuned variance when needed.
« Last Edit: March 13, 2019, 11:42:05 PM by creativespiral »

Re: Voice Component Modeling with the Prophet Rev2
« Reply #9 on: March 14, 2019, 03:16:11 AM »
Thanks a lot for this!! I have followed the steps above and AB-compared the oscillator-sound (saw) with my P6. I found that the movement caused by detuning sounds very similar. The Rev2 sounds a bit brighter though, but by turning the cutoff to about 157 I think the frequency balance sounds pretty close, too (only set by ear, I have not studied the frequency curves with a spectrum analyzer).

maxter

Re: Voice Component Modeling with the Prophet Rev2
« Reply #10 on: March 14, 2019, 04:59:21 AM »
Yes, intonation offsets will be great! I have to try this. I think this plays a major part in the VCO sound, and I'm a big fan of "stretching" the tuning in the upper and lower registers, like on a piano. I believe this is why some say the Boog sounds sterile, or cold, compared to the Moog: it's "too" well intonated. I'd suggest to those people to detune the octave intonation by just a bit and see if that makes the difference. It's another reason for hopefully getting an added fine-tune mod destination; I feel like I will run out of mod slots FAST by having to scale all the tuning mods with the sequencers. And a direct mod of Note Num to Osc Freq at an amount of 1 is too much for me. Finer modulation would be great on many parameters, not just osc tune.

I'm thinking more and more about that scaling modulation capability (multiplying and dividing) in the matrix that you've suggested on the request thread. It makes a ton of sense.

Re: Voice Component Modeling with the Prophet Rev2
« Reply #11 on: March 14, 2019, 10:48:31 AM »

Thanks Osflaa, good to hear... Yeah, as mentioned in the article, it's not a magic bullet for perfectly copying other VCO synths.   There will still be differences due to variations in the VCF, VCA, and envelope circuit implementations, and other specifics of the electrical design.   But if we were 90% of the way there before, this may get you to 95% of the character now.   It definitely gives a more organic feel, without the artificial motion associated with Osc Slop or LFOs.   You could also try out the VCO harmonic-jitter modulation that I've been playing around with.  That might get you a tiny bit closer to the P6.

https://www.youtube.com/watch?v=Amhl07TVdNM

For voice modeling, I'm as excited about real-world analog/acoustic instrument modeling as I am about emulating VCOs...  I've been working on some string, woodwind, and brass patches that just feel full of life with these minor voice imperfections.
« Last Edit: March 14, 2019, 10:57:54 AM by creativespiral »

panic

Re: Voice Component Modeling with the Prophet Rev2
« Reply #12 on: March 15, 2019, 07:05:17 AM »
Spiral, a creative one you are indeed. I cannot try it out for some time, but there is something I have some doubts about:

If its wired up correctly, then every key press should advance the Gated Sequencer by one Key Step, giving you a sort of virtual six voice instrument.   Each step of the sequence will have unique values for Osc1 and Osc2 fine tuning, giving you a slightly different detuned character per-voice.  It gives a more organic feeling, since each voice (and each osc) has unique tuning imperfections. 

Try holding down a chord, and you'll notice the natural motion/phasing associated with each oscillator having slightly different tuning. 

I never used Key Step mode a lot, so perhaps I am wrong, but since the gated sequencer is a per-voice modulation source, I think the key-stepping will also happen per voice. Meaning that, if you set it up like you describe for six voices and then play a six-note chord, I think all six voices will have exactly the same offset: press a first note and it will be on step one of the gated sequence; hold it and press a second note, and this second voice will not be on step 2, but also on step one. Of course, after some random playing, not all voices will be on the same step anymore, and you will have per-voice variation (but not in the exact way you want, not in a controllable six-voice-variation way). In unison, yes, it will work as you want, but you can't play chords.
« Last Edit: March 15, 2019, 07:07:31 AM by panic »

maxter

Re: Voice Component Modeling with the Prophet Rev2
« Reply #13 on: March 15, 2019, 07:28:59 AM »
That's not the way Key Step works. Each pressed key, or incoming MIDI note, does indeed advance the sequencers one step, which enables running external sequencers through it and altering/offsetting those sequences in different ways, each note advancing the sequencers one step. It's possible to make some semi-"generative" melodies this way.

EDIT: I may be wrong, just tried this out and got different results than expected after triggering more than one voice simultaneously. It seems that you're right on this one, panic. I'd like to test some more, but don't have the time right now.
« Last Edit: March 15, 2019, 07:47:30 AM by maxter »

panic

Re: Voice Component Modeling with the Prophet Rev2
« Reply #14 on: March 15, 2019, 07:55:23 AM »
Maxter, so you say it is not how I describe? So that would mean that each keypress moves the gated sequencer for every voice at once? Not what I expected/remembered, but I'm always glad to be proven wrong. So what happens when you press one note and hold it, then press a second one? Voice 2 goes to step 2, but voice one doesn't move and stays on step one? Seems difficult to implement; kudos to Sequential.

Edit: I didn't see your edit, and reading back my post it sounds a bit sarcastic, which was not my intent; I was sincerely surprised.
« Last Edit: March 15, 2019, 08:12:40 AM by panic »

maxter

Re: Voice Component Modeling with the Prophet Rev2
« Reply #15 on: March 15, 2019, 08:25:22 AM »
Well, I had time to do just a little more testing. It does indeed work the way creativespiral intended. If you switch modes and then back, though, the sequencers get offset somehow.

I can't get my head around how the sequencers are set up right now. If I set the mode to something else, trigger a few notes, and go back to Key Step, I get mostly repeating patterns of 8x3, for sequencers of 6 steps...

EDIT: It's just my 8 voices cycling, with the different voice sequencers offset by having been triggered in a different mode and stopped on different steps, then continuing their respective cycles in Key Step mode. The voices come in a different order 3 times, repeating after 3 8-voice cycles = 4 cycles of the 6-step sequencers.
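That realignment arithmetic checks out: with 8 voices each carrying a 6-step sequence, the whole pattern realigns at the least common multiple of the two cycle lengths. A quick Python check (illustration only, Python 3.9+ for `math.lcm`):

```python
from math import lcm

voices, seq_steps = 8, 6
period = lcm(voices, seq_steps)               # keypresses before exact realignment
print(period)                                 # 24
print(period // voices, period // seq_steps)  # 3 voice cycles, 4 sequencer cycles
```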
« Last Edit: March 15, 2019, 09:09:58 AM by maxter »

Re: Voice Component Modeling with the Prophet Rev2
« Reply #16 on: March 15, 2019, 08:50:06 AM »
The gated seq seems to work as suggested for poly sequencing.

The only slight catch for the purposes of the VCM effort is that the gated seq doesn't skip over a step whose note you're still holding elsewhere, making it not a perfectly cyclical voice model, but honestly this is pretty trivial.

https://drive.google.com/open?id=1Uptmn--_I2OgTyvqpjy828mUybizdX3X

Here is an example of the gated seq in its simplest form. I have a self-oscillating filter that isn't keytracked (every note is the same) and set the gated seq up to raise the pitch on every key press. I try to demo it with some different playing behaviors.



« Last Edit: March 15, 2019, 08:51:39 AM by philroyjenkins »

Re: Voice Component Modeling with the Prophet Rev2
« Reply #17 on: March 16, 2019, 06:41:08 AM »
Really nice work! But what is the difference between using this method and using one or two LFOs with destination Osc Slop? Set LFO1 to Triangle and LFO2 to Random, both having Osc Slop as the destination.

Re: Voice Component Modeling with the Prophet Rev2
« Reply #18 on: March 17, 2019, 02:15:38 AM »

It is explained in a very detailed way in the article that creativespiral wrote and linked to in the first post. To briefly summarise: Slop applies a randomised detuning effect, whereas the gated-sequencer method yields fixed offsets which repeat cyclically. Furthermore, it allows precise control of the offsets (of multiple parameters) on up to 16 "virtual voices". For example, in a 5-virtual-voice setup, keypresses 1, 6, 11, etc. will have the same offset values as each other; keypresses 2, 7, 12, etc. will have the same offset as each other; likewise for the other "voices".
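The keypress-to-virtual-voice mapping described here is plain modular arithmetic; a tiny Python sketch (illustrative only, with a hypothetical `virtual_voice` helper):

```python
def virtual_voice(keypress, voice_count=5):
    """Map a 1-based keypress number to its 1-based 'virtual voice'."""
    return (keypress - 1) % voice_count + 1

# Keypresses 1, 6, 11 land on voice 1; 2, 7, 12 on voice 2; and so on.
assert virtual_voice(1) == virtual_voice(6) == virtual_voice(11) == 1
assert virtual_voice(2) == virtual_voice(7) == virtual_voice(12) == 2
```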

panic

Re: Voice Component Modeling with the Prophet Rev2
« Reply #19 on: March 19, 2019, 08:04:48 AM »
Thanks maxter and philroy for the testing!  It confirms what I thought. The trick with the Key Step sequencer is still a nice one if you are looking for tuning offsets between your voices (the best way to achieve it is to program all the steps of the sequencer and then do some random playing with more than one note at once); it just doesn't go as far as creativespiral had hoped in mimicking certain behaviors, such as voice count or predictability of results.
In the past, to create minimal random offsets between the voices and liven things up, I used to assign very slow key-synced random LFOs in minimal amounts to various destinations. I was really pleased with the effects. Until someone on the forum said it was such a shame that the random LFO is not really random, but spits out the same seemingly random pattern every time… I guess my brain was just too willing to hear a result which wasn't there.