Can someone explain to me the concept of "MPE pitch bend range"?

I’m familiar with the concept of a master pitch bend range of ± 2 semitones. The note(s) is/are bent a whole step up or down. However, I’m quite confused when I commonly see 24, 48, or 96 semitones as a value for the MPE pitch bend range. That range is huge and the pitch bend would sound very excessive.

So can someone please explain why the number of semitones is so high, and how exactly the MPE pitch bend range works on a per-key basis?

It’s probably easier to think about if you consider an MPE controller like the ROLI Seaboard or Linnstrument. On those devices you can keep your finger down and move all the way from one end of the keyboard/playing surface to the other, covering a large range of semitones and ending up far away from the pitch of the note you originally held down. The note being played was based on the pitch of the note-on message, but now the synth needs to be able to play a faraway note without having received a different note-on MIDI message, and the pitch bend message is used to achieve this.

Devices like the Striso have a rough equivalent to this too: if you hold down the glissando button, you can move the pitch all the way from one note button you’ve held down to another one that is quite far away, and again this is done via the original note’s MIDI note-on message plus pitch bend messages.
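
A minimal sketch of what that looks like on the wire (plain Python; the channel number, starting note, and the ±48 bend range are picked purely for illustration): the note-on fixes the starting pitch, and everything after that is pitch bend sent on that note’s own channel.

```python
# Sketch: one MPE note sliding from C3 (60) up a fifth, assuming a +/-48
# semitone bend range on the note's member channel. Values are raw MIDI bytes.

BEND_RANGE = 48.0   # semitones; must match what the synth expects
CENTER = 8192       # 14-bit pitch bend center (no bend)

def note_on(channel, note, velocity=100):
    """Build a note-on message: status 0x90 ORed with the channel."""
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, semitones):
    """Build a pitch bend message that offsets the held note by `semitones`."""
    value = int(round(CENTER + semitones / BEND_RANGE * 8192))
    value = max(0, min(16383, value))                         # clamp to 14 bits
    return bytes([0xE0 | channel, value & 0x7F, value >> 7])  # LSB, then MSB

msgs = [note_on(2, 60)]                # finger lands on C3, member channel 2
for step in range(8):                  # finger slides up seven semitones
    msgs.append(pitch_bend(2, step))   # no new note-on is ever sent
```

The synth keeps tracking those bend messages for that one note, which is how the slide can cover such a large distance without any new note-ons.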

Steve is on the money with this. Instead of discrete note-on events, the single note event is bent to the new note position. Indeed, for some generators/synths this can sound very unnatural, as it just speeds up playback of the waveform instead of calculating a new waveform at the higher frequency. MPE-compatible generators/synths handle this large bend gracefully, or compress waveforms that are amenable to that transformation.

To put it in instrument lingo, it’s a glissando rather than a legato response to note changes.

Thank you both @SteveElbows and @Photon. I was starting to think that MPE pitch bend range was related to the glissando effect.

So am I correct in thinking that the MPE pitch bend range is the maximum positive or negative semitone change from the root note? And that a glissando to any note within that max/min is possible?

More or less. The actual semitone change can be altered or attenuated by the DAW, since it might assume the maximum pitch bend value corresponds to only two semitones (MIDI data vs. the value applied to the waveform). Software built for the MPE standard will usually have that range set somewhere from 48 to 96.

The whole conversation gets a bit complex because MIDI doesn’t work in semitones; instead it works with integers from 0 to 16383. It’s up to the instrument to decide how to interpret the slide input (it could interpret the slide along the same scale, or 1/2x, 2x, 4x, etc.) and then up to the software to interpret the values sent from the instrument on that 0-16383 scale. So ultimately it’s fairly arbitrary what semitone is generated; in many cases the instrument and the software assume 48 semitones of range. AFAIK in many instruments you can turn off the glissando, or have it interpreted as a different control.
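
To make that arbitrariness concrete, here’s a tiny sketch (plain Python; the raw value is just an example) of how the same 14-bit number turns into different semitone offsets depending on the range each side assumes:

```python
def bend_to_semitones(raw, assumed_range):
    """Interpret a raw 14-bit pitch bend value (0-16383) as a semitone offset."""
    return (raw - 8192) / 8192 * assumed_range

raw = 10240  # some value a controller sent (a quarter of the way above center)
for rng in (2, 12, 48, 96):
    print(rng, bend_to_semitones(raw, rng))  # 0.5, 3.0, 12.0, 24.0 semitones
```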

Thanks. Why would it ever be advantageous to set the MPE pitch bend range to something smaller (for example, 12 compared to 96)?

I have mine at 12 on the Eigenharps, since the SWAM plugins I often use have a maximum pitch bend range of 12. I never slide more than that anyway.

Different controllers, software and devices can have different default interpretations of the range and amount of change, so you need to be able to change settings so that the controller and the device agree.
A typical symptom of a pitch bend range mismatch with, say, a Sensel Morph or a Haken Continuum: slide from one note on the surface to another, lift the finger, then press at the same position, but get a different note. This shows that the movement has been scaled to the wrong range.

When this happens, it’s more challenging to play because you have to bend by ear and can’t use the physical distance and surface markings to judge what note you’re ending on. You can still get vibrato and expressive bends, but the control isn’t as good, and you have to reposition fingers/hands for the subsequent notes. It’s just not a natural way to play when the bend range doesn’t match between controller and synth (at least when you’re playing on a continuous surface).
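
A rough sketch of why that happens (assuming, just for illustration, a controller encoding with ±48 while the synth decodes with ±12):

```python
def encode(semitones, controller_range):
    """Controller side: turn a physical slide (in semitones) into a 14-bit value."""
    return int(round(8192 + semitones / controller_range * 8192))

def decode(raw, synth_range):
    """Synth side: turn the raw value back into a pitch offset, using *its* range."""
    return (raw - 8192) / 8192 * synth_range

slide = 7  # the player slides seven semitones across the surface
raw = encode(slide, controller_range=48)
print(decode(raw, synth_range=48))  # ~7.0  -> ranges match, lands on the right note
print(decode(raw, synth_range=12))  # ~1.75 -> mismatch, a completely different pitch
```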

So if I understand correctly, the actual value of the MPE pitch bend range is not important, only that it is the same on the controller and the synth. Obviously, if both can use the same high range, that is better.

Exactly. Controller and sound source need to agree on the proper setting; in most cases that will be the widest range the sound source can use. Bear in mind that the chosen range also determines the effective resolution, since a fixed number of bend values is spread across it.

Right now, pitch bend is limited to 14-bit resolution. So, whatever range you define, it gets divided into 16,384 values.

That is more than enough for the classic MIDI default of +/- 2.

(note: the rest of these numbers are derived from math, which I am notoriously bad at. consider them rough estimates at best.)

At +/- 12, that’s about 683 values per half step. Still quite smooth.

About 341 values at +/- 24 isn’t bad.

Roughly 171 at +/- 48 is where a slow glissando will start to sound stairsteppy, for me.
*+/- 48 is the default pitch bend range for MPE synths and controllers, per the MPE specification.

I don’t use 96, ever, under any circumstance.

I do use 48, quite often, because I’m too lazy to reconfigure everything and save presets.

If I’m recording, I’ll usually set everything to 24, because that’s about as far as I’ll ever want to slide.

…and if I know that I’m not taking things that far, I’ll go with 12.

(Plus, some of my hardware maxes out at 12 or 24 anyway. But really, it’s about resolution.)
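
For anyone who wants to double-check those figures, the arithmetic is simply the 8,192 values on each side of center divided by the bend range (a quick sketch, not tied to any particular device):

```python
# Pitch bend steps available per semitone at common MPE bend ranges (14-bit MIDI).
for bend_range in (2, 12, 24, 48, 96):
    steps_per_semitone = 8192 / bend_range
    cents_per_step = 100 / steps_per_semitone
    print(f"+/-{bend_range:>2}: {steps_per_semitone:6.1f} steps per semitone, "
          f"{cents_per_step:.2f} cents per step")
# +/- 2: 4096.0 steps per semitone, 0.02 cents per step
# +/-12:  682.7 steps per semitone, 0.15 cents per step
# +/-96:   85.3 steps per semitone, 1.17 cents per step
```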

OK, so the actual value of the MPE pitch bend range is important, since higher values (e.g. 96) have lower resolution and can sound stair-steppy.

I now have a much better understanding of this feature in the MPE specification.

This is a great breakdown of the resolution aspect! I’ve done slow glissandos on SWAM instruments with large pitch bend ranges and didn’t notice the quantization (the technical term for the stair-steppiness, for you other folks) of the playing note’s pitch. I wonder whether that was because I wasn’t perceptive enough, I wasn’t moving slowly enough, or the pitch bend information is interpolated on the software side?

Interpolation using the pitch rate of change seems like a pretty simple method of smoothing it out and eliminating quantization, but I guess you can only do that on the generator side of the DAW. I assume VST/VST3 instruments only receive whole integers for pitch bend. Fascinating stuff.

a few extras on this…

resolution = 8192 / N steps per semitone, where N = bend range in semitones
(it’s 14-bit, so the 16384 values are split bipolar, + and -, giving 8192 each way)

what value?
ideally, you’d use the longest slide range your controller has…
e.g. if your grid is 24 cells long, then 24 is ideal (?!)
this allows you to slide from anywhere to anywhere and it’ll be correct,
and since the 8192 is spread out evenly over your surface, it’s the highest resolution possible.

BUT we don’t often need to slide the full length of a controller :wink:
so really, you can set the pitchbend range to what you want your longest slide to be.

the issue is that for very large surfaces, this can become a bit too low-resolution.
this is why Haken introduced (optional) MPE+, which uses 21-bit pitchbend (via an extra CC), because they used to have a Continuum with 93.50 ‘notes’, and 14-bit was deemed insufficient.

this actually raises a couple of other related topics…

as mentioned by others, synths will often ‘slew’ MIDI values; everyone knows how low-res standard MIDI is, so to avoid stepping, synths will glide towards incoming values so the quantisation effect is not as obvious.
(this has been done for a long time… well before MPE, even on old hardware synths)
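
just to illustrate the general idea (not any particular synth’s implementation; the smoothing coefficient is made up), a one-pole slew on incoming bend values might look like:

```python
class BendSlew:
    """Smooth stepped incoming pitch bend values with a simple one-pole filter."""

    def __init__(self, coeff=0.01):
        self.coeff = coeff    # 0..1: higher = faster tracking, more audible stepping
        self.current = 0.0    # smoothed bend, in semitones

    def process(self, target, n_samples):
        """Glide toward the most recently received bend value over one audio block."""
        out = []
        for _ in range(n_samples):
            self.current += self.coeff * (target - self.current)
            out.append(self.current)
        return out

slew = BendSlew()
block = slew.process(target=1.0, n_samples=64)  # creeps smoothly toward 1 semitone
```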

it’s actually interesting to think about how this ‘resolution’ compares to physical space, e.g. if you have a small surface, or if your sensors are not that accurate, having a ton of resolution isn’t really going to help much - something else might force the ‘quantisation’

microtonal - something like the Continuum is sometimes used for microtonal work; here resolution is pretty important, as everything is going to carry a pitch bend. same situation with non-12-note scales.
so here the ‘slide range’ aspect is not what’s important…
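
as an illustration of the microtonal case (the 31-EDO example and the ±48 range are just assumptions), a sound engine or mapper typically splits an arbitrary fractional pitch into the nearest note plus a small bend:

```python
BEND_RANGE = 48.0  # semitones; whatever value the sound engine is set to expect

def pitch_to_note_and_bend(fractional_note):
    """Split a fractional MIDI note number into (nearest note, 14-bit bend value)."""
    note = round(fractional_note)
    offset = fractional_note - note                 # -0.5 .. +0.5 semitones
    bend = round(8192 + offset / BEND_RANGE * 8192)
    return note, bend

# One step of 31-EDO is 1200/31 ~= 38.7 cents, so five steps above middle C (60):
fractional = 60 + 5 * (12 / 31)                     # ~= 61.94
print(pitch_to_note_and_bend(fractional))           # (62, 8181): a tiny bend below D
```

at ±48 each bend step is worth roughly 0.6 cents, which is workable; at ±96 it grows to about 1.2 cents, which is why the range choice still matters here even though nobody is sliding.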

btw: all the above is for continuous surfaces; on something like an Eigenharp/Striso, where you don’t slide, you’d typically use much smaller values (higher resolution) - I quite often use +/-2 on my Eigenharps.

but, as mentioned above, with MPE now defaulting to 48 semis, and synths usually baking this into presets, unfortunately… we often leave it all at 48, simply so we don’t have to update the presets :frowning:

in some ways, I think perhaps the default should have been +/-12, enough for most practical purposes - but I guess they felt some would get confused as to why they couldn’t slide the full length of their surface, so they chose a value where this was less likely to happen.

btw2: I should also point out that MPE does have the ability to communicate between controller and software what the pitchbend range is… theoretically bi-directionally. but I’ve never seen it really work that well… though I might just be unlucky :wink:

That’s the promise of MIDI 2.0 that nobody’s talking about.

I have some concerns, not the least of which is that the wealth of community-built MIDI utilities out there will surely break this functionality. But once the feature is widely adopted, retrofitting and bulletproofing our tools is going to be an exciting challenge…

this is already in MPE (MIDI 1.0)…

it’s part of the standard MPE RPNs, so it’s there at the protocol level.
funny, it’s kind of vague at the moment where the ‘role’ of deciding pitch bend range lies.

I’d assumed (and implemented) that the controller would send this RPN to the sound engine, and so the sound engine would adopt it.
(perhaps I assumed this because, whilst implementing controllers, it’s ‘under my control’)
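
for reference, the message in question is built on the classic pitch bend sensitivity RPN; a minimal sketch of the CC sequence a controller might send on each member channel (channel and range values are only examples):

```python
def pitch_bend_range_rpn(channel, semitones, cents=0):
    """Build the CC sequence for RPN 0,0 (pitch bend sensitivity) on one channel."""
    status = 0xB0 | channel          # control change status byte for this channel
    return bytes([
        status, 101, 0,              # RPN MSB = 0
        status, 100, 0,              # RPN LSB = 0 -> pitch bend sensitivity
        status, 6, semitones,        # data entry MSB = range in semitones
        status, 38, cents,           # data entry LSB = additional cents
        status, 101, 127,            # null RPN, so later data entry isn't misread
        status, 100, 127,
    ])

msg = pitch_bend_range_rpn(channel=1, semitones=48)  # e.g. one of the member channels
```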

however, I later noticed that in Bitwig, the Linnstrument control script (and others) does the opposite.
Bitwig determines the pitchbend range, and then sends it to the controller!
(again, perhaps as expected, as Bitwig presumably wanted it under its control)

there are benefits in both approaches…
I generally prefer it to be on the controller side, as I usually want it set according to the physical aspects of the controller… which only it knows!
but sometimes being under DAW control is better, since it could be talking to multiple plugins with different abilities.

fortunately, it’s (already) possible for both sides to implement this and sync to what the other is requiring - though it’s still a bit confusing for the user.

really, this whole pitchbend range area is a nasty hack; we should be dealing with fractional pitches, and pitchbend should be consigned to the historical dustbin :wink:

unfortunately, I don’t it really gets much better with midi 2.0… or should I say, I doubt we’ll see daws/plugins supporting much more … both the expressive (aka mpe) and microtonal side is a bit too niche a use-case - so we seem to be lucky (and be grateful) to get any attention from daws etc.

k, perhaps a bit pessimistic as things have improved over the years… but still a long way off.

I guess that was ambiguous.

For me, the promise of 2.0 is that some of these widely ignored aspects of 1.0 will actually be supported. It’s the impetus to resolve our “chicken or the egg” dilemmas.