Aftertone & Signal

From Patch Cables to MIDI CC: Modular Thinking in Music Software

What modular synthesis teaches about signal flow, and how to apply it in DAWs through MIDI CC, modular environments like Bitwig's Grid, or visual patching in Max and Pure Data.

Synthesis · January 18, 2026

"I have this sense that I know, and to some extent I have control over, what's going on inside the transistors." — Bob Moog

The Cable as Teacher

Bob Moog was describing something fundamental about the appeal of electronic instruments: the feeling of understanding, at every level, what produces the sound. In modular synthesis, this understanding is made visible. Signal flow is immediate and physical. A patch cable carries voltage from one place to another, and the result is audible the moment the connection is made. There is no ambiguity about what is controlling what. The cable itself is the documentation. This directness shapes how modular synthesists think about sound: not as a collection of presets to be recalled, but as a living network of relationships between sources and destinations.

The DAW environment abstracts this away. MIDI CC values become numbers in a timeline, disconnected from the tactile feedback of hardware. Modulation exists as automation lanes or as assignments buried in plugin menus. The underlying principle remains the same: one parameter influencing another. But the visibility of that relationship disappears. What was once a physical connection becomes an invisible link, easy to forget, difficult to trace.

This abstraction is not inherently negative. It enables complexity that would be unmanageable in hardware. But it also obscures the fundamental insight that modular synthesis teaches: that interesting sound comes from dynamic relationships, not static values. A filter cutoff set to 74 is less compelling than a filter cutoff that responds to how you play. The number matters less than what drives it.

Control Voltage Thinking

The core concept in modular synthesis is control voltage, or CV. An oscillator might generate audio, but it can also generate a slowly moving voltage that controls something else — a filter's cutoff, another oscillator's pitch, an amplifier's gain. The same signal path that carries sound can carry control information. This interchangeability is what makes modular systems so expressive: anything can modulate anything.

MIDI CC operates on the same principle, just with different constraints. Where CV is continuous and analog, CC is discrete and digital: each value is quantized to one of 128 steps (0–127) and sent as a message rather than held as a standing voltage. But the conceptual framework is identical. CC 74 controlling a filter cutoff is the software equivalent of patching an LFO to a VCF's CV input. The modulation source, the destination, and the relationship between them remain the same.

  HARDWARE CV                              SOFTWARE MIDI CC

  ┌─────────┐                              ┌─────────┐
  │   LFO   │                              │ CC Gen  │
  │  ~0.5Hz │                              │ (LFO)   │
  └────┬────┘                              └────┬────┘
       │ CV                                     │ CC 74
       ▼                                        ▼
  ┌─────────┐     ┌─────────┐              ┌─────────┐     ┌─────────┐
  │   VCF   │◀────│   VCO   │              │ Synth   │◀────│  MIDI   │
  │ Cutoff  │     │  Audio  │              │ Plugin  │     │  Notes  │
  └────┬────┘     └─────────┘              └────┬────┘     └─────────┘
       │                                        │
       ▼                                        ▼
     Audio                                    Audio

  Source ─▶ Destination                    Source ─▶ Destination
  (same signal-flow concept)
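The continuous-versus-discrete distinction in the diagram fits in a few lines. Here is a minimal Python sketch; the function name and the assumed 0–5 V control range are illustrative, not any standard:

```python
def cv_to_cc(volts, v_min=0.0, v_max=5.0):
    """Quantize a control voltage to a 7-bit MIDI CC value (0-127)."""
    norm = (volts - v_min) / (v_max - v_min)
    norm = max(0.0, min(1.0, norm))      # clamp out-of-range voltages
    return round(127 * norm)

cv_to_cc(2.5)   # a mid-range voltage lands at CC value 64
```

The clamp is the software stand-in for what a hardware input does physically: a voltage beyond the expected range simply pins the destination at its limit.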

What differs is the source of that modulation. In hardware modular, you might use an envelope follower that tracks the amplitude of an incoming signal, or a random voltage generator, or a sequencer stepping through values. In software, these same concepts translate directly — envelope followers become analysis algorithms, random generators become noise sources or probability functions, sequencers become step-based patterns locked to tempo.

Same Signal, Different Wire

Modular CV and MIDI CC are expressions of the same idea: one signal controlling another. The difference is not conceptual but practical — how the control signal is generated, transmitted, and received. Understanding this equivalence opens up the full vocabulary of modular thinking to DAW-based production.

What Hardware Teaches

Eurorack users develop certain instincts that serve them well beyond the rack. They learn to think in terms of signal sources and destinations. They learn that a modulation source is only as useful as what it controls, and that a controllable parameter is only as expressive as what drives it. They learn that complex, evolving textures often emerge from simple patches where a few well-chosen modulation routings interact.

These instincts transfer directly to software, but the tools for acting on them are often scattered or nonexistent. A DAW provides automation, but automation is typically drawn by hand — it represents the composer's intent frozen in time, not a living response to musical input. A synthesizer plugin might have internal modulation, but that modulation is locked inside the plugin, unable to affect anything else in the session.

The gap is in the middle layer: tools that generate control signals based on musical context and route them flexibly to destinations. In hardware, this is the job of utility modules such as envelope followers, clock dividers, and sample-and-hold circuits. These aren't glamorous modules with instant gratification, but they are the connective tissue that makes complex patches possible.

Some software environments have addressed this directly. Bitwig's Grid brings true modular patching into the DAW, with CV-style signal flow between modules that can process audio and control data interchangeably. Max for Live offers even deeper flexibility, allowing users to build custom devices with arbitrary signal routing and logic. These tools represent the most complete translation of modular thinking into software — they preserve the visibility of connections and the freedom to route anything anywhere. For users working in those environments, the concepts discussed here are already native. The question for everyone else is how to capture the same mindset with more conventional tools.

Analysis as Modulation Source

One particularly powerful category of modular utility is the analyzer — modules that observe some aspect of the incoming signal and convert that observation into CV. Envelope followers are the most common example, but the principle extends further. A pitch tracker converts audio frequency to voltage. A transient detector outputs a trigger when it senses an attack. A comparator outputs high or low based on whether a signal exceeds a threshold.

In the MIDI domain, the equivalent is analyzing note data: when notes occur, how densely they cluster, what velocities they carry, how their timing relates to the tempo grid. This information already exists in the MIDI stream — it simply needs to be extracted and converted to CC output. The result is modulation that responds to playing, not just to time.

Playing Dynamics as Control Source

Think about what information is available in a stream of MIDI notes. Each note carries a velocity value. Notes occur at specific times, with varying intervals between them. Multiple notes may sound simultaneously, or the stream may thin to single notes with space between. Over time, patterns emerge: accelerating passages, dense chords, sparse phrases with rubato timing.

All of this is raw material for modulation. Velocity is the most obvious: a soft note could close a filter while a hard note opens it, creating a direct relationship between playing dynamics and timbral brightness. But velocity alone is limited to note-on moments. Between notes, there is no new information. The CC value must either hold steady or decay according to some predetermined curve.
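That hold-or-decay behavior can be sketched in a few lines of Python; the tick-based timing and the decay constant are illustrative assumptions:

```python
def velocity_to_cc(events, total_ticks, decay=0.98):
    """events: {tick: velocity (0-127)}. Returns one CC value per tick."""
    value = 0.0
    out = []
    for tick in range(total_ticks):
        if tick in events:
            value = float(events[tick])   # note-on: jump to the velocity
        else:
            value *= decay                # between notes: exponential decay
        out.append(int(value))
    return out

# A loud note, then a soft one four ticks later.
cc = velocity_to_cc({0: 120, 4: 40}, total_ticks=16)
```

Between note-ons the curve is predetermined, exactly as described above: nothing in the stream can change the value until the next note arrives.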

Density offers a different perspective. A rapid passage of notes creates a high-density reading; a sparse phrase creates a low one. Unlike velocity, density is a continuous measurement. It exists between notes as well as on them. A sustained chord might register as high density for as long as the notes overlap, then drop as they release. This creates modulation curves that follow the musical texture rather than individual note events.

Timing analysis adds another dimension. The interval between successive notes reveals something about phrasing that neither velocity nor density captures. Accelerating intervals suggest building tension; decelerating intervals suggest resolution. Playing ahead of or behind the beat suggests different expressive intentions. These temporal relationships can drive modulation that responds to musical direction, not just instantaneous state. (This is the approach taken by tools like Formfactor, which extracts these dimensions from MIDI input and converts them to CC output.)
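A rough sketch of the interval side of this analysis, comparing the two most recent inter-onset intervals; the function and its signed output range are illustrative, not any particular tool's implementation:

```python
def interval_trend(onsets):
    """onsets: note-on times in seconds. Returns a value in [-1, 1]:
    positive when intervals are shrinking (accelerating), negative
    when they are growing (decelerating), 0 when steady."""
    if len(onsets) < 3:
        return 0.0
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    prev, cur = intervals[-2], intervals[-1]
    if prev == cur:
        return 0.0
    # Relative change between the last two intervals, clamped to [-1, 1].
    change = (prev - cur) / max(prev, cur)
    return max(-1.0, min(1.0, change))

accelerating = [0.0, 1.0, 1.8, 2.4]   # intervals shrink: 1.0, 0.8, 0.6
decelerating = [0.0, 0.5, 1.2, 2.1]   # intervals grow: 0.5, 0.7, 0.9
```

The signed output maps naturally onto a bipolar modulation source: tension-building passages push the value one way, relaxing passages the other.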

Practical Example

A filter cutoff controlled by note density: sparse playing keeps the filter closed for a darker tone, while dense chords or rapid passages open it up. The result is a sound that breathes with the music, darker in the spaces and brighter in the activity, without any manual automation.

In MIDI CC workflows, a tool like Formfactor converts note density to CC messages that any plugin can receive. In Bitwig's Grid, you'd patch a Note Counter into a Follower module, then route that signal directly to a filter's cutoff CV input. In Max, an equivalent patch might use [poly~] to track active voices, smooth the count with [slide~], and scale the result to the filter's expected range. Same concept, different wiring.

Tempo-Synced Shaping

Modular LFOs often run freely, their rates set by knob position with no relationship to musical time. This creates a particular kind of movement — organic, drifting, unpredictable in its phase relationship to the music. For some applications this is desirable. For others, especially in rhythmic contexts, it produces results that feel disconnected from the groove.

Tempo-synced modulation solves this by locking the LFO cycle to the project tempo. A one-bar LFO completes exactly one cycle per bar, returning to its starting point on each downbeat. The phase relationship between modulation and music becomes fixed, creating patterns that reinforce the rhythm rather than drifting against it.

The shape of that cycle matters as much as its rate. A simple sine wave rises and falls smoothly, creating gentle undulation. An exponential curve accelerates toward its peak, creating a sense of urgency or lift. A curve that holds steady before snapping to a new value creates gating effects. The vocabulary of easing functions familiar to animators and motion designers applies directly: ease-in, ease-out, elastic, bounce — each shape produces a different musical character when applied to a parameter over time.
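A tempo-synced LFO with selectable shapes can be sketched as a pure function of beat position; the shape names and curves below are illustrative stand-ins for a fuller easing library:

```python
import math

def synced_lfo(beat, beats_per_cycle=4.0, shape="sine"):
    """Return a modulation value in [0, 1] for a given beat position."""
    phase = (beat / beats_per_cycle) % 1.0      # 0..1 within one cycle
    if shape == "sine":
        return 0.5 - 0.5 * math.cos(2 * math.pi * phase)
    if shape == "ease_in":                       # accelerates toward peak
        return phase ** 3
    if shape == "gate":                          # holds low, snaps high
        return 0.0 if phase < 0.75 else 1.0
    return phase                                 # default: linear ramp
```

Because the value depends only on beat position, a one-bar (four-beat) cycle returns to its starting point on every downbeat, which is the fixed phase relationship described above.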

The choice of cycle length interacts with musical structure. A one-bar LFO creates variation within a measure. A four-bar LFO creates variation across a phrase. An eight-bar LFO creates gradual evolution that may not repeat until a full section has passed. Longer cycles risk becoming imperceptible; shorter cycles risk becoming distracting. The right length depends on the musical context and the parameter being modulated.

Step Sequencing as Modulation

The step sequencer is one of the oldest electronic music tools, predating both Eurorack and MIDI. In its simplest form, it cycles through a series of values, advancing to the next step on each clock pulse. Applied to pitch, it creates melodic patterns. Applied to other parameters, it creates rhythmic modulation.

The trigger source determines the sequencer's rhythmic relationship to the music. A clock-based trigger advances on tempo subdivisions — every beat, every eighth note, every sixteenth. The resulting pattern locks rigidly to the grid, predictable and precise. A note-based trigger advances on each MIDI note-on, creating patterns that follow the performer's rhythm rather than the metronome. If you play eighth notes, the sequence advances on eighth notes. If you play rubato, the sequence follows.

Combining step sequencing with continuous shaping bridges the gap between discrete steps and smooth motion. Rather than jumping instantly from one value to the next, the transition can follow a curve — linear for mechanical precision, exponential for organic acceleration, or more complex shapes for specific effects. The step values define the targets; the shape defines how the modulation moves between them.
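A sketch of that idea: step values define the targets, and a pluggable curve controls the motion between them (the linear and held variants below are illustrative):

```python
def shaped_sequence(steps, ticks_per_step, curve=lambda t: t):
    """Interpolate between successive steps using the given curve,
    a function mapping [0, 1] -> [0, 1]. Wraps around at the end."""
    out = []
    for i, value in enumerate(steps):
        nxt = steps[(i + 1) % len(steps)]
        for tick in range(ticks_per_step):
            t = curve(tick / ticks_per_step)
            out.append(value + (nxt - value) * t)
    return out

linear = shaped_sequence([0, 100, 50, 80], ticks_per_step=4)
snapped = shaped_sequence([0, 100, 50, 80], ticks_per_step=4,
                          curve=lambda t: 0.0)   # hold each step: no glide
```

Swapping the curve changes the character without touching the step values: a constant-zero curve recovers the classic stepped sequencer, while an exponential curve would give the organic acceleration mentioned above.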

Layered Control

When step sequencing, continuous LFO, and performance analysis combine, they produce modulation more complex than any single source could achieve. A step sequence provides the rhythmic scaffold. An LFO adds continuous movement within and between steps. Analysis of playing dynamics adds responsiveness to performance. Each layer contributes something the others cannot.
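A weighted blend is one simple way to realize this layering; the sources and weights below are arbitrary illustrations of the idea, not a prescription:

```python
import math

def layered_cc(step_value, beat, velocity, lfo_beats=4.0,
               weights=(0.5, 0.3, 0.2)):
    """Blend three normalized sources into one CC value (0-127):
    a step-sequencer value, a tempo-synced sine LFO, and velocity."""
    phase = (beat / lfo_beats) % 1.0
    lfo = 0.5 - 0.5 * math.cos(2 * math.pi * phase)   # 0..1 sine
    sources = (step_value / 127, lfo, velocity / 127)
    blended = sum(w * s for w, s in zip(weights, sources))
    return min(127, round(127 * blended))

# Same step and beat position, different playing dynamics:
# the output follows the performance.
soft = layered_cc(step_value=64, beat=1.0, velocity=30)
hard = layered_cc(step_value=64, beat=1.0, velocity=120)
```

The weights decide which layer dominates: scaffold-heavy weighting keeps the rhythm rigid, while shifting weight toward the analysis source makes the modulation track the player.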

Routing and Destination

The modulation source is only half the equation. Equally important is where that modulation is routed. MIDI CC provides a standardized vocabulary for common destinations: CC 1 for mod wheel, CC 7 for volume, CC 10 for pan, CC 11 for expression, CC 74 for filter cutoff. Hardware synthesizers and many software instruments respond predictably to these assignments, making it possible to design modulation routings that work across different instruments.

The choice of destination shapes the musical result more than any other decision. Filter cutoff modulation affects timbre directly — the brightness and harmonic content of the sound. Expression modulation affects dynamics — the sense of emphasis and phrase shaping. Reverb send modulation affects spatial depth — how present or distant the sound feels. Pan modulation affects stereo image — the sense of movement across the sound field.

Running multiple modulation streams to different destinations creates interaction between parameters. A filter that opens as reverb decreases might produce a sound that gets brighter and drier during active passages, darker and more spacious during rests. An expression swell coinciding with a pan sweep might create a sense of a sound emerging from the stereo field as it crescendos. These interactions are where modular thinking yields its most distinctive results.

Working Methods

Starting simple is essential. A single modulation routing — velocity to filter cutoff, for example — establishes a clear cause-and-effect relationship that can be heard and understood. Adding a second routing introduces interaction. Adding a third introduces complexity that may be difficult to predict. Building up gradually allows each layer to be evaluated before the next is added.

Isolating variables helps when something sounds wrong. If four analysis modes all feed into a single CC output, determining which one is causing an unwanted artifact requires muting them one at a time. The ability to selectively disable modulation sources is as important as the ability to enable them.

Matching modulation depth to musical context requires experimentation. A filter modulation that sounds dramatic on a simple pad may become overbearing in a dense mix. A subtle expression variation that works for an exposed solo may be imperceptible when other elements are active. The same modulation routing may need different depth settings in different sections of a piece.

Recording the result — either as audio or as CC automation — provides a reference point. Modulation that responds to performance is by nature variable from take to take. Capturing a particularly successful performance preserves both the notes and the modulation they generated, allowing that specific result to be used in the final mix even if subsequent takes differ.

The Larger Point

Modular synthesis is really just a workflow that prioritizes dynamic parameter control, and the patch cable enforces this by making every modulation routing explicit and visible. Software removes that enforcement but not the capability — the same signal-flow thinking applies whether you're in Bitwig's Grid, Max for Live, Pure Data, or a standard DAW using MIDI CC to carry control signals between tools. Sources generate control data, destinations receive it, and the relationship between them determines how the sound moves. The implementation varies, but the concepts are identical across all of them.

Static sounds are easy to build. Dynamic sounds require thinking about what controls what, and that's really the whole point.

Related

Formfactor implements these concepts as a MIDI effect plugin. It analyzes incoming MIDI across four dimensions — timing, density, velocity, and intensity — then multiplexes those signals into eight independent lanes, each outputting its own MIDI CC stream. Every lane can blend analysis modes with tempo-synced LFOs and step sequencers, allowing a single MIDI input to drive complex, multi-destination modulation.