I just got a Seaboard and wish to switch to a proper MPE-capable DAW.
I’ve seen a couple of videos about Cubase, Tracktion, Bitwig, Reaper etc… and it seems one of the major differences is being able to see / edit the expression of multiple notes vs. a single selected note. This last approach feels limited to me, but I wonder how problematic this really is.
Could you tell me about your experience there? What do you like / dislike about MPE editing in your DAW?
Thanks Keymanpal!
What I got from this is that there’s a bunch of “us” who would rather just re-record than edit. And no one is really trying to sequence from scratch. Which I totally understand, but in my case I feel there are still situations where I’d like to fix mistakes in parts I can’t actually play right, or experiment with tweaks. I have a feeling that in Cubase, for instance, this “one-by-one” note editing is annoying (isn’t it slow??). Curious to hear your thoughts!
I don’t think I understand what you mean by «one-by-one» note editing, but for what it is worth, I think the way MPE is implemented in Cubase is very good.
Cubase has always taken pride in being the best DAW for midi editing, which is probably why they have put effort into their MPE implementation as well.
Thanks Kai. By “one by one note” I mean I have to select a note to see its envelopes, and can only see / edit one note at a time. As opposed to Bitwig, where the envelopes are visible for all notes (which also seems to allow tweaking the envelopes of several notes at the same time).
Ah! Hmm, it is not very visual, true. Fixing a small mistake here and there is quick and easy to do, but I don’t know about more serious editing. I haven’t used it enough to form a proper impression on the workflow yet, but now I am curious as well. Out travelling atm, but I’ll try to give it a go this weekend and report back.
Couple of things
It’s not limited to cc74
Editing of mpe data is a bit more flexible - you can scale it, and bend it a bit. Bitwig is pretty much moving individual points, which with a lot of data is a bit tedious.
That said, I really like bitwig for its other features (clips, the grid, modulators), Cubase has a better mixer - so I’m not sure if mpe editing would be the main decision maker for me.
Sorry, what is this cc74 limitation?
For the rest, that makes a lot of sense. Right, so in Cubase everything is about “drawing” then shaping from what I understand, as in Bitwig this is about dealing with thousands of breakpoints. Which might be cumbersome, I get it. Hummm.
And to come back to my previous question, since you seem to have played with both: do you sometimes end up feeling limited by this “one note at a time” editing in Cubase?
Not at my computer at the moment , and not played with it for a while - so this is from memory
Cubase records any CC # per channel, bitwig only cc74 - but that’s not usually an issue since cc74 is all that most mpe controllers need.
One by one note.
I guess you mean the fact that Cubase has the curves overlaid directly onto the notes, whereas bitwig has it in a separate window at the bottom.
I prefer the Cubase approach, it’s clearer to me.
Guess it depends how much editing you’re planning on doing, but personally I’d go for whichever daw suits you best.
One thing to bear in mind: bitwig has better mpe support for its internal instruments than Cubase.
I’ve just finished recording our most Eigenharp-centered track. Because of this thread I recorded all the Eigenharp parts as midi in Cubase instead of audio as I normally would. The experience wasn’t good, and I doubt I will be doing that again.
I didn’t find editing MPE midi very useful. Quantizing won’t necessarily get me the results I want, as where I first touched a key might not be where I actually started applying pressure, so quantizing often made the timing even worse. And it’s close to useless for breath-controlled playing. Manually moving an off note to where it sounds right worked better. And deleting stray notes I had briefly touched here and there cleaned up some sections a bit. I fixed some poorly played pitch bends, but that also felt a bit clumsy. With a PB range of 24, the actual adjustments were so minuscule that they were hard to do graphically.
I also had some weird results, particularly when breath was involved. A SWAM clarinet gave me hanging notes on one specific section every time I played it, and I never figured out why. So in the end I played things slightly differently to avoid it happening. The exact same thing played with the same settings and plugin in GigPerformer on my Mac did not give me this problem, so I still don’t know what the issue was.
All in all I found MPE midi editing a bit of a hassle and in the way of what I consider the point of using an MPE controller in the first place. I would get better results (and much quicker) by just recording twice as audio so that I could create a composite take if need be. And quantizing the audio would work better than the midi if I had needed to fix the timing.
I know everyone has their own workflow, but I don’t quite see the point of recording MPE midi and won’t bother the next time around. I’m a huge fan of Cubase and have been using it for 25+ years, but I wouldn’t say editing MPE midi is a good reason to get it. (There are a million other good reasons though)
Hi Kai, thank you for taking the time to experiment with this, and for the detailed feedback!
Interesting point about the timing / pressure which doesn’t match after quantizing.
What do you mean by “the actual adjustments were so minuscule that they were hard to do graphically”? Can’t you zoom the piano roll in Cubase to adapt the view and get more precision?
Well, to be honest I don’t really know how this would best be done. For me, part of the «experiment» was to just work as I usually would and try to work with MPE midi as best I could, based on how I otherwise do things in Cubase. The MPE data is «embedded» as part of a note, and is shown/edited in a small pop-up window next to the note, not down in the controller lane as it normally would be. So, I resized the tiny window to make it as large as I could (approx. 1/2 of screen height), but still, some cents of adjustment within a ±24 semi range is visually very small. So in the end I typed values instead of adjusting them with the mouse. If there is a way to zoom in the small popup window or to see the pitchbend values in a controller lane, I didn’t figure out how.
even on small projects I tried, this is what I found too…
(and why i raised the other topic of ‘do you bother recording MPE tracks’)
I think it’s just the quantity of data that we emit with mpe controllers, it quickly becomes unmanageable to fine-tune everything. sure, cubase has some nice tools for processing individual (mpe) notes.
but, what I think we really lack is higher-level tools. (perhaps in a later version of cubase?)
for example: we can already do quantisation on a whole track, with limits on how much it will move notes around.
what about tools for MPE that :
smooth the MPE data out
offsets, or scales
removes awkward ‘spikes’
visualisation - highlight expressive data that looks suspicious, like jumps and spikes.
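to make the ideas above a bit more concrete, here’s a minimal sketch in Python (all function names are mine, not from any DAW) that treats a note’s per-note expression data as a plain list of (time, value) points, with values in 0..127:

```python
# Minimal sketch of the higher-level MPE tools suggested above:
# smoothing, offset/scale, and spike removal. Hypothetical names;
# per-note expression data is modeled as a list of (time, value) points.

def smooth(points, window=5):
    """Moving-average smoothing of per-note CC data."""
    values = [v for _, v in points]
    half = window // 2
    out = []
    for i, (t, _) in enumerate(points):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append((t, sum(values[lo:hi]) / (hi - lo)))
    return out

def scale(points, factor, center=64):
    """Scale expression depth around a center value, clamped to 0..127."""
    return [(t, min(127, max(0, center + (v - center) * factor)))
            for t, v in points]

def remove_spikes(points, threshold=20):
    """Drop points that jump away from BOTH neighbours by more than
    `threshold` - the 'awkward spikes' mentioned above."""
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        if abs(cur[1] - prev[1]) > threshold and abs(cur[1] - nxt[1]) > threshold:
            continue  # isolated spike: skip it
        out.append(cur)
    out.append(points[-1])
    return out
```

a visualisation pass could then just flag whatever `remove_spikes` would delete, instead of deleting it.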
like many fields in computing, the more data you create - the more important it becomes to pre-process and visualise it better - it’s the process of turning data into information.
so my feeling is… at the moment we are in the early days, we are still in ‘data processing’ times, and we need to move up a level or two, and then be processing at that level.
(a bit like in audio: removing individual clicks n’ pops is laborious, but something like izotope rx changes the whole process entirely)
perhaps with midi 2.0, per note messaging and hi-res will all become more mainstream, and so daws and alike will start to pay more attention to this area?
True. In my pitchbend example mentioned above, fixing an out-of-tune bend by drawing a new curve is not really what I would want to do. What I actually would want is to “pitch quantize” to the closest semitone, say.
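The arithmetic behind that “pitch quantize” idea is simple enough to sketch. This is a hypothetical Python sketch, assuming standard 14-bit pitch bend (0..16383, 8192 = no bend) and the ±24 semitone bend range mentioned earlier:

```python
# Hypothetical sketch: snap a pitch-bend value to the nearest whole
# semitone. Assumes 14-bit bend values (0..16383, center 8192) and a
# pitch-bend range of +/- 24 semitones, as in the example above.

BEND_RANGE = 24   # semitones each way
CENTER = 8192     # 14-bit midpoint = no bend

def bend_to_semitones(bend):
    """Convert a raw bend value to a semitone offset."""
    return (bend - CENTER) / 8192 * BEND_RANGE

def semitones_to_bend(st):
    """Convert a semitone offset back to a raw bend value."""
    return round(CENTER + st / BEND_RANGE * 8192)

def quantize_bend(bend):
    """Snap a bend value to the nearest whole semitone."""
    return semitones_to_bend(round(bend_to_semitones(bend)))
```

A real tool would of course quantize the *end point* of a bend curve while preserving its shape, but the per-value snapping is the core of it.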
Now you got me thinking of how “the perfect” MPE editing tools should be.
I think it’s not only editing, but potentially some area around ‘assists’ (like midi fx) - so far much of this is done by the controller (software/firmware), but there’s scope elsewhere. (*)
Definitely an interesting area to be explored.
————
(*) I put this down to the fact that much of the controllers’ software predates mpe, and so each controller solved the problems its own way.
E.g. all controllers handle the idea of mpe and non-mpe modes, and often things like voice count and cc mapping.
I would really like to see an MPE-enabled midi stream processing tool - I’m imagining a MIDI fx plugin that let you write Python or JavaScript against the midi stream, with MPE-aware functions, or something like that. This feels like a place where, with some barriers to entry already handled, we could see some real innovation.
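As a toy sketch of what such a scripting hook might look like (everything here is invented for illustration, not a real plugin API): in MPE each note lives on its own channel, so a processor can keep per-note state keyed by channel and hand each message to a user-written function.

```python
# Toy sketch of an MPE-aware scripting hook. In MPE, each note gets
# its own channel, so per-note state can be keyed by channel.
# All names and the message format are hypothetical.

class MPEProcessor:
    def __init__(self, transform):
        self.transform = transform  # user function: (state, msg) -> msg
        self.state = {}             # per-channel (i.e. per-note) state

    def process(self, msg):
        # msg: dict like {"type": "cc", "channel": 3, "cc": 74, "value": 90}
        ch_state = self.state.setdefault(msg["channel"], {})
        return self.transform(ch_state, msg)

def halve_pressure(state, msg):
    """Example user script: tame channel pressure, pass everything
    else through untouched."""
    if msg["type"] == "pressure":
        msg = dict(msg, value=msg["value"] // 2)
    return msg

proc = MPEProcessor(halve_pressure)
```

The point being that the host handles the MPE bookkeeping (channel-per-note state), and the user script only expresses the musical intent.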
Personally I find the abilities of the controllers themselves, and the current practice of putting the parameters for modulating the controller’s output into the driver, to be a bad situation. ROLI’s Dashboard doesn’t give me the tools I need to properly interpret and shape the raw controller output from my Seaboard.
Additionally, since this is a technology that allows for more expressive playing and a deeper relationship with the sound and the modulation of it, I think the best way forward would be deeper and deeper patches, with more (and more complex) modulation schemes that are adapted to the instrument.
Let me explain what I mean: when I play the saxophone I have amazingly fine control over many dimensions of sound simultaneously: I can control volume, separate from breath intensity, separate from pitch (mouth control), separate from pitch (fingering control), etc. But even though these controls are discrete, they do interact: if you overblow, it has an impact on the pitch, etc. To get to the point where we can make instruments that are expressive to these degrees (not to emulate, but just to have the depth), we need the ability to really finely tune all the data coming from the controllers (with curves, envelopes, compressor/expanders, LFOs, etc.) and to have them cross-modulate each other. AND, most importantly, we need to be able to store these modulation configs with the patches. So when you bring up a patch, you also bring up all the finely tuned controller processing.
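A bare-bones Python sketch of that idea (the config format and all names are invented for illustration): per-dimension response curves plus a cross-modulation term, e.g. breath intensity nudging pitch the way overblowing does, stored as plain data so it can travel with the patch.

```python
# Sketch of per-dimension response curves plus cross-modulation,
# stored as plain data alongside a patch. Everything here is a
# made-up illustration of the idea, not any real DAW's format.

import json

config = {
    "pressure_curve": 1.5,    # exponent applied to normalized pressure
    "breath_to_pitch": 0.05,  # semitones of pitch drift per unit breath
}

def shape(raw, cfg):
    """raw: dict with normalized 0..1 'pressure' and 'breath', and
    'pitch' in semitones. Returns the processed control values."""
    out = dict(raw)
    out["pressure"] = raw["pressure"] ** cfg["pressure_curve"]
    # cross-modulation: breath intensity nudges pitch (overblowing)
    out["pitch"] = raw["pitch"] + raw["breath"] * cfg["breath_to_pitch"]
    return out

# Because the config is plain data, it can be serialized with the patch:
saved = json.dumps(config)
```

The recall problem then reduces to loading `saved` back whenever the patch is loaded.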
In this scheme, should the DAW record the raw controller data? Or the processed data? Or…?
If I ever find the time, I would like to explore some of the concepts that have popped up in this thread.
My plans don’t involve trying to write my own MPE editor. Rather, I have ideas involving MPE step sequencers and auto-generated MPE data, which somewhat overlap with the MPE Midi-Fx that someone mentioned earlier. This would get me familiar with dealing with this quantity of MIDI data. But I will struggle to talk about it more until I’ve actually done some of it, because it’s the sort of idea where I expect the initial exploration will send me off on whatever interesting tangent seems to have the most potential.