asked Feb 06 at 16:32 by former (277)


I think I've hit the limits of the modularity concept. Let's say I have a neat little piece of functionality encapsulated in an FX Grid (e.g. a step sequencer) and now want to use it to modulate a parameter of a Polysynth other than the audio output. The FX Grid (containing the step sequencer) can only modulate its own modulators. Typically I would add the FX Grid to the FX slot of the Polysynth, but then it can't change any of the knobs of the synth itself, only the audio stream coming out of it. For certain modulations there are workarounds (the Audio Sidechain modulator to receive audio from other signals, some of the note devices such as the Note Receiver), but only for specific types of modulation. So let's say I want to change the cut-off of the Polysynth: how do I do that? If I build a chain and then add the Polysynth and my homebrew device as siblings, will that be sufficient? Or is there a way to use a knob on the chain as a bridge to reach the Polysynth cut-off?

There are probably good reasons for this, but it's a harsh limit on the modularity concept, and the biggest obstacle to building a library of "modules" for all kinds of uses. I would love to get around it; otherwise we build Grids for everything and end up with big, overloaded Grids containing the whole world.

best regards, Former

Let a Grid produce the modulation signal as its audio output. To the right of this Grid, place the Polysynth and add an Audio Rate modulation source, then apply it to filter cutoff or anything else you want. Audio Sidechain should also work. Note that this technique is essentially transforming the Grid audio output into a modulation signal so you won't hear the Grid output. That's fine! Just because it's called "Audio Output" doesn't mean the signal needs to be audible; it's just data.
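To make the idea concrete, here is a minimal sketch in plain Python (not Bitwig's API; the names and numbers are purely illustrative) of what the Audio Rate modulator conceptually does: it reads an audio-rate buffer and maps each sample onto a parameter value.

```python
# Sketch of "audio as modulation data" (illustrative, not Bitwig code).
BASE_CUTOFF_HZ = 1000.0   # the knob's current position
MOD_DEPTH_HZ = 4000.0     # the modulation amount you dial in

def modulated_cutoff(sample: float) -> float:
    """Map one bipolar audio sample in [-1, 1] onto the cutoff parameter."""
    return max(20.0, BASE_CUTOFF_HZ + MOD_DEPTH_HZ * sample)

# A step-sequencer-like signal rendered as the Grid's "audio" output:
grid_audio_out = [0.0, 0.5, 1.0, 0.5, -0.5]
cutoffs = [modulated_cutoff(s) for s in grid_audio_out]
```

The point of the sketch is just that the samples never reach the speakers; they are consumed as parameter data, which is why the Grid output stays inaudible.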

A limitation of this approach is that the Grid only produces one signal, its output.

EDIT: As former's answer points out, if you can nest the modulation target devices in the Grid's FX chain, you can export any Grid signal as modulation for those nested devices. The audio signal technique above can be used for "remote" modulation anywhere, but is generally more limited.


answered Feb 06 at 22:37 by voidshine (302)

edited Feb 08 at 03:03

To clarify, since my screenshot shows the polysynth nested -- nesting isn't necessary, but it works too. Another downside to my solution is that it doesn't seem to work per-voice; all notes are getting the same modulation. It's an interesting puzzle and I'll be happy if someone has a better answer.

  — (Feb 07 at 02:37) voidshine

Hi Voidshine, cool! How the heck did you come up with the idea of using the audio output?! Cool hack; I would never have thought of it. To conclude: modulators (also from the Grid) only work top-down in the device hierarchy, and one of the few known workarounds is Audio Rate (or, for MIDI, the Note Receiver device). I realized I could actually add the Polysynth to the FX slot of the FX device; oddly enough, no Audio Rate is needed then, but with multiple devices this solution quickly gets very crowded (a device nested in a device in a device). Yours handles that better, but is also limited. I see we can have some modularity in Bitwig. I attach an example with a "curve painter" FX device (an idea from another thread here) modulating the cut-off of a Polysynth.

If I have multiple smaller functions, I would probably put them together in one FX Grid (or Polygrid), with a dedicated modulator per function, and then nest the Polysynth in the FX slot; like this, I can modulate multiple parameters differently. Will try that.

Best regards, Former


answered Feb 07 at 13:43 by former (277)

Oh yeah, awesome! So you CAN use inner Grid modules to modulate devices nested within the Grid's FX chain. I didn't know those signals could escape the Grid, but it makes sense because, during Grid processing, those FX devices haven't been computed yet -- the nested Polysynth is "downstream", and the Grid "owns" it, so it's accessible for modulation. This makes me appreciate Bitwig even more, actually.

  — (Feb 08 at 02:50) voidshine

I wouldn't expect the reverse architecture (Grid nested in the Polysynth's FX chain) to allow Grid modulation of the "parent". Why not? Because the Polysynth's signal is already computed by the time its FX chain runs -- it is the signal being sent to the FX chain, so nothing happening in that nested Grid can be expected to change the past. If you use the Audio Rate technique from my answer, sure, you can get modulation anywhere, but it might be slightly delayed compared to the nested device's direct modulation. Upvoting here, because it's a richer answer than mine, thanks!
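That ordering argument can be put into a toy sketch (plain Python, nothing Bitwig-specific; `parent` and `fx_chain` are made-up names): the parent's signal is fixed before the FX chain runs, so an FX-chain child can at best influence the next frame.

```python
# Toy model of per-frame processing order (illustrative only).
def process_frame(parent, fx_chain):
    audio = parent()          # the parent's signal is computed first...
    for fx in fx_chain:
        audio = fx(audio)     # ...then the FX see it; they can't rewrite the past
    return audio

# The FX can scale the already-computed signal, but nothing it does here
# can change what parent() produced this frame.
out = process_frame(lambda: 1.0, [lambda a: a * 0.5])
```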

  — (Feb 08 at 02:52) voidshine

On second thought, I wonder if there really is any delay in my technique. Probably, Bitwig intelligently orders audio processing so that Audio Rate receivers run after their sources. How'd I come up with the idea? When I was researching Bitwig, I was impressed by some demo that said you could basically plug anything into anything and things "probably" wouldn't explode. It's one reason I jumped aboard -- the freedom! :)

  — (Feb 08 at 03:13) voidshine

Hi Voidshine, now we just need a good use case for this! I am fiddling around with a combined "curve painter"/step sequencer for the cut-off, and a decent aftertouch effect; so far the results are modest, but with this flexibility many things should be possible. I also looped in a wavetable, but it sounds like crap; difficult to master that. Regards, Former


answered Feb 09 at 13:48 by former (277)

Hi Voidshine, on the tricky question of what comes first in the routing:

Here's an example where someone nested a synth in an FX Grid (in the Pre-FX area!), looped the signal into the Grid via an Audio Rate module, processed it there (in this case, splitting the left/right audio), and sent it out via two modulators. To get it back to the track, they used two DC Offset devices as receivers of these modulations, for further processing with FX chains per audio channel (left or right). A cool variation on our topic: the Grid here contains not only the control logic (which can happen before the synth is executed) but also the FX routing logic (which has to happen after the synth is executed and audio is generated). Like this, the FX Grid can act as an "umbrella" for the synth device. I'm not sure whether this would work out for very time-critical uses like beats, but it allows the Grid to be "wrapped" around any synth, as long as there's enough CPU. Groovy!

Regards, Former


answered Feb 13 at 13:38 by former (277)

Ah, there are limits to the "Pre-FX" slot of the Grid devices: it seems some non-Bitwig synths don't let the MIDI signals pass, according to this thread: probably the synth needs the ability to "pass through" MIDI, otherwise no MIDI arrives in the Grid. Makes sense, in a way. I guess in the Grid I could get around that by adding a "Pitch In" module, but how do I pass that to an exit? There is no "Pitch Out", just modulations available. Regards, Former


answered Feb 13 at 14:07 by former (277)

This got me curious about timing, so I tested and found the expected sensible behavior. Drop a Phase-4 into the Pre-FX of the FX Grid and set red oscillator phase to 90 degrees so it'll instantly pop a high signal when played. Go into the Grid and set a Gate signal to modulate the Phase-4's output all the way down. You will see that playing notes still produces the tiny little pop for each note. That's because the modulation runs on the next frame. If you move the Phase-4 over to Post-FX, the modulation gets applied before generating the note audio, and so you don't hear the pops when starting notes... you get the pop at end instead, as the note is released. So, long story short: devices "owned" by the Grid are accessible for modulation, and the modulation timing is determined by the position in the signal chain.
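The test above can be mimicked with a toy frame loop (plain Python, illustrative names, not Bitwig code): when the gate modulation is computed in the same pass as the synth, it only takes effect one frame later, which is exactly the pop described.

```python
# Toy frame loop for the Pre-FX case: the gate the Grid computes this
# frame is only applied on the NEXT frame, so the first frame "pops".
def run_pre_fx(frames):
    outputs = []
    gate = 1.0                              # last frame's modulation value
    for note_on in frames:
        synth_out = 1.0 if note_on else 0.0
        outputs.append(synth_out * gate)    # synth runs before the gate update lands
        gate = 0.0 if note_on else 1.0      # Grid closes the gate while a note plays
    return outputs

# The first frame of the note still pops through before the gate closes:
assert run_pre_fx([True, True, True]) == [1.0, 0.0, 0.0]
```

In the Post-FX placement the update order flips, which is the mirror-image behavior voidshine describes: silence at note start, a pop at release.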

Great KVR forum thread, btw, this one:


answered Feb 16 at 07:37 by voidshine (302)

Hi Voidshine,

Thanks for highlighting this! So it makes a difference for e.g. drum applications. Also, following the umbrella idea, I have tried two scripts (a step sequencer as envelope shaper, and an aftertouch) that should be executed after the synth, but the Polygrid here is the parent, so it's executed first. I think for slow pads it doesn't make a difference, but for time-sensitive short things like drums, it would:

It would probably be better to use an empty Polygrid, then add the synth in the FX slot, then add an FX Grid and put my scripts there. However, the MIDI signal might be interrupted by the synth (e.g. by a "Massive" synth, though it's not a problem with Polysynth), so I need to retrieve it. As a workaround, I used a "Note Pitch Shifter" in the Note FX slot and then, in the FX chain, a "Note Receiver" ahead of my scripts to get the MIDI signal back. This works at least as long as the MIDI signal isn't processed somehow by the synth; it would be a problem if the synth contained e.g. an arpeggiator.

It's getting a bit complex to keep the MIDI signal ready everywhere in the processing chain, but at least, with a decent amount of hacks, it's possible.

best regards, Former


answered Feb 17 at 08:34 by former (277)

