Recording the EM Engine in different DAWs

How much success have you had recording the internal EM engine in different DAWs?

I had no issues with Ableton Live, but I can’t seem to record or see the key left/right vibrato in Bitwig… and no joy at all in Studio One.

A few set-up guides for each DAW would be useful!


In Bitwig you’ll need to select a controller that has been marked as using MPE.
(perhaps the latest release added an Osmose one? I’ve not checked yet…)

I’ll check it out later… been too busy playing with the Osmose as a standalone instrument…
(and honestly, I rarely bother recording MPE MIDI… I usually prefer to go straight to audio)

I don’t know about Studio One… does it have multichannel or MPE support?

Cubase is also very good… in fact, I suspect it might be one of the few that can record MPE+, if that’s important to you.
(frankly, MPE+ is not really a big deal on the Osmose, due to it having a short physical slide for X)

As for guides… hmm, really, checking the guides for any of the other MPE controllers would give you most of what you need… I think there’s a section in the Continuum User Guide, and also for the Roli Seaboard, possibly the Linnstrument too.
That said… it’s an area that’s changing quite a bit, so it’s likely they are out of date.
(e.g. Live’s support is relatively recent compared to others)

Generally the process is… you’ll either have to mark a controller or a track as MPE in the DAW.
A few will ‘auto-detect’ an MPE controller - but of course, it’s quite possible this won’t happen with the Osmose (being new).
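
As a concrete picture of what the DAW has to handle once it’s in MPE mode, here’s a minimal sketch (Python, using the mido library; the channel and values are arbitrary) of the per-note message cluster an MPE controller sends: each sounding note gets its own member channel, carrying its own pitch bend, channel pressure and CC74 alongside the note on/off.

```python
# Minimal sketch of the per-note message cluster an MPE controller sends.
# Channel numbers in mido are 0-based, so channel=1 here is MIDI channel 2,
# i.e. the first member channel of a lower zone whose master is channel 1.
import mido

member_channel = 1  # MIDI channel 2 (0-based in mido)

note_cluster = [
    mido.Message('note_on', channel=member_channel, note=60, velocity=100),
    # per-note expression rides on the same member channel:
    mido.Message('pitchwheel', channel=member_channel, pitch=1024),                 # X / pitch slide
    mido.Message('aftertouch', channel=member_channel, value=90),                   # Z / pressure
    mido.Message('control_change', channel=member_channel, control=74, value=80),   # Y / timbre
    mido.Message('note_off', channel=member_channel, note=60, velocity=0),
]

for msg in note_cluster:
    print(msg)
```

A DAW that isn’t treating the track/controller as MPE will typically merge all of that onto one channel, which is where the per-note expression gets lost.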


I don’t think it matters on your first port, but if you’re using the second one, and if your DAW renumbers MPE channels (glares suspiciously at Ableton), you’ll need to make sure your DAW isn’t including channels 15 or 16 in that pool.
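
For what it’s worth, if your DAW or setup lets you send (or edit) the MPE Configuration Message yourself, the standard way to keep the upper channels out of the member pool is to set the lower-zone member count low enough that only channels 2-14 get used. A rough sketch of that RPN 6 sequence with mido (0-based channels; the count of 13 is an assumption about what’s needed to keep 15/16 free):

```python
# Sketch of an MPE Configuration Message (RPN 6) that caps the lower zone
# at 13 member channels (MIDI channels 2-14), leaving channels 15/16 alone.
import mido

master_channel = 0   # MIDI channel 1, lower-zone master (0-based in mido)
member_count = 13    # assumption: enough to keep channels 15/16 free

mpe_config = [
    mido.Message('control_change', channel=master_channel, control=101, value=0),  # RPN MSB
    mido.Message('control_change', channel=master_channel, control=100, value=6),  # RPN LSB = 6 (MCM)
    mido.Message('control_change', channel=master_channel, control=6, value=member_count),
]

for msg in mpe_config:
    print(msg)
```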

Yes, that’s an issue I expect to run into with the Squarp Hapax at some point. It currently doesn’t let you limit the MPE range, although it starts from the lowest channels, so I haven’t triggered the problem in practice yet.

There is one advantage to the Hapax coming up with its own channel numbers. It has an overdub facility, and it is nice to be able to use this with looping MIDI to build up additional layers on top of the existing MPE data. That would end up as a conflicting mess if it just reused the channels sent by the MPE controller you are playing, rather than intelligently assigning new, unused ones.


It’s going to be pretty common for a DAW/sequencer to ‘re-voice’ MPE notes… basically because it’s necessary if you start editing notes…
e.g.
imagine adding new notes: they have no channel, and the one used would depend upon other active notes etc.
Also, what if you had used a controller set to 15 voices, but only played up to 4… and then shifted it to a VST that only had 4 voices?

Frankly, this area is a bit of a mess, and always has been… centred on the question of who is responsible for voice allocation.

Also, let’s remember that in non-MPE MIDI that is polyphonic, the sound engine is responsible for this.
So it’s natural to feel that should be the case here too!?
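
To make the allocation question concrete, here’s a toy round-robin allocator of the sort a DAW or sound engine might run at playback time: edited or newly drawn notes have no channel of their own, so member channels get handed out on the fly, and the result won’t generally match what the controller originally sent. The class and numbering are purely illustrative (1-based channel labels here, not a real MIDI API).

```python
# Toy round-robin MPE voice allocator: edited or newly drawn notes have no
# inherent channel, so member channels are handed out at playback time.
class MpeAllocator:
    def __init__(self, member_channels=range(2, 17)):  # MIDI channels 2-16
        self.free = list(member_channels)
        self.active = {}                 # note_id -> channel

    def note_on(self, note_id):
        if not self.free:
            # voice stealing: reclaim the oldest note's channel
            oldest = next(iter(self.active))
            self.free.append(self.active.pop(oldest))
        channel = self.free.pop(0)
        self.active[note_id] = channel
        return channel

    def note_off(self, note_id):
        channel = self.active.pop(note_id)
        self.free.append(channel)        # reused later, likely for a different note
        return channel


alloc = MpeAllocator()
print(alloc.note_on('C4'))    # -> 2
print(alloc.note_on('E4'))    # -> 3
print(alloc.note_off('C4'))   # channel 2 freed...
print(alloc.note_on('G4'))    # -> 4 (round robin, not necessarily 2 again)
```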

As for the Hapax… Squarp have not really followed through on the MPE side (yet!?).
I did have a long conversation with them about areas where their ‘simplification’ would create potential issues - and this included using restricted channel numbers for outputs.

Though I admit, not on input… so that would need to be raised with Squarp.
(not recording 15/16… I can’t see them allowing arbitrary recording to capture MPE+ data, or enough demand for explicit MPE+ support)

As for recording in DAWs…
unfortunately, many DAWs now do a limited MPE implementation, explicitly looking for channel pressure/CC74/pitch bend… and ignoring the MPE spec (which says you can have other voice messages).
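
To illustrate the difference, a spec-faithful recorder would simply keep every voice message it sees on a member channel, rather than whitelisting pitch bend/channel pressure/CC74. A rough sketch of that distinction, again with mido (the flag and function names are mine, just for illustration):

```python
# Sketch: group *all* voice messages by member channel, instead of keeping
# only the pitch bend / channel pressure / CC74 whitelist many DAWs use.
from collections import defaultdict
import mido

WHITELIST_ONLY = False   # flip to True to mimic the 'limited' implementations

def record(messages):
    per_channel = defaultdict(list)
    for msg in messages:
        if WHITELIST_ONLY:
            keep = (msg.type in ('note_on', 'note_off', 'pitchwheel', 'aftertouch')
                    or (msg.type == 'control_change' and msg.control == 74))
        else:
            keep = True              # the MPE spec allows other voice messages too
        if keep:
            per_channel[msg.channel].append(msg)
    return per_channel

stream = [
    mido.Message('note_on', channel=1, note=60, velocity=100),
    mido.Message('control_change', channel=1, control=74, value=70),
    mido.Message('control_change', channel=1, control=2, value=50),   # dropped by the whitelist
    mido.Message('note_off', channel=1, note=60, velocity=0),
]
for channel, msgs in record(stream).items():
    print(channel, [m.type for m in msgs])
```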

Perhaps an interesting one to check would be Cubase… although it now has MPE support, for years before that it had multi-channel recording (on one track), which was not tied to the MPE spec… actually, I suspect it goes back to the per-voice messages they introduced in VST3?
Anyway, it then acted like a plain MIDI recorder. I’m not sure if it re-voiced channels; I suspect it might NOT have.

Again, to be clear if testing: I’m NOT talking about its MPE mode, which may have been restricted.
(also, this was an older version of Cubase; I’m not sure if it’s changed in later versions)

Technically, I don’t think they are breaking the spec, because CCs other than 74 on member channels are listed in the Appendix E table of the spec document (v1.1) as optional, not mandatory, so receivers aren’t breaking the spec by ignoring them.

The impression that only CC74 matters is also compounded by the focus of the wording in the rest of the document, where the choice of words puts all the emphasis on the mandatory things. For example, they say things like "MPE offers per-note expressive control using the following messages" and then list only the ones we are used to, like pitch, channel pressure and CC74. They also mention only a third dimension of per-note expression on a number of occasions, for example: "In addition to being able to express per-note pitch (Pitch Bend) and pressure (Channel Pressure), a third dimension of per-note-control may be expressed using MIDI Control Change #74."

As regards channel reallocation, the official spec deals with both scenarios by referring to the two relevant traditional MIDI modes, Poly Mode and Mono Mode. For Poly Mode it explicitly says that “Channel numbers do not have to be preserved during editing. Member Channels could be dynamically reassigned during playback or retransmission.” For Mono Mode, it says that channels do need to be preserved.

Unfortunately, the detail of these two modes is not something I think there is high awareness of among MIDI users, and a lot of hardware and software never really acknowledges these modes or their implications in their user interfaces and implementations. I’m not a big Logic user, but in the past my impression was that Mono Mode was explicitly supported and selectable as the way to enable MPE in that DAW (e.g. back when ROLI were first teaching people how to set up MPE). I don’t know if the situation is still the same these days, but if it is, then that’s probably another DAW option that preserves channels.

If I want to be optimistic about DAW flexibility and support in the long term, I suppose I look to MIDI 2.0, with the hope that, in order to make the back end of DAWs fully able to support the broader per-note expression capabilities of MIDI 2.0, they will need to lay down data structures and UI paradigms that support a lot more per-note continuous messages. We might then hope that these underlying changes spill over into features that support backwards compatibility with MIDI 1.0 MPE. Or, if not, third parties may build helpful bridging utilities between these two MIDI eras.
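
As a taster of what those per-note continuous messages look like, here’s my reading of the current UMP layout for a MIDI 2.0 per-note controller, packed by hand; treat the opcode and bit positions as my understanding of the published UMP spec rather than a definitive reference, and the index/value here are arbitrary.

```python
# Sketch: pack a MIDI 2.0 'Assignable Per-Note Controller' UMP message
# (Message Type 4, two 32-bit words), per my reading of the UMP spec.
def per_note_controller(group, channel, note, index, value32, registered=False):
    opcode = 0x0 if registered else 0x1      # 0x0 registered, 0x1 assignable
    word0 = (0x4 << 28) | (group << 24) | (opcode << 20) \
            | (channel << 16) | (note << 8) | index
    word1 = value32 & 0xFFFFFFFF             # full 32-bit controller data
    return word0, word1

# e.g. a continuous per-note controller with an arbitrary index of 74,
# roughly playing the role CC74 plays in MPE, but attached to the note itself:
w0, w1 = per_note_controller(group=0, channel=0, note=60, index=74,
                             value32=0x80000000)
print(f'{w0:08X} {w1:08X}')
```

The point being that the note carries the expression directly, with no per-note channel trick and far more resolution than a 7-bit CC.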

Likewise, I might hope that CLAP’s version of per-voice messages gains more traction and broader developer awareness than VST3’s per-voice messages have managed to date.

I intend to dig into both these areas more myself in future, although in the case of MIDI 2.0 I am waiting for the updated spec to become available in the next few months before I start reading the documentation all over again. There have been changes, but only association members know the details of them at this moment.


I know the details of the spec… but my point was as you quoted…

When considering MPE support, unfortunately, whilst ‘demand’ is growing… it’s still pretty niche.
So developers often just implement the ‘bare necessities’; they are not interested in all the details.

Put bluntly, many are not willing to put the effort into all the extra use-cases, and just want to keep it simple… covering (what they see as) the 90% use-case, and doing it in a way that requires the least effort/change to their existing software.

Frankly, I think we are pretty lucky many have even gone as far as they have…
and I’ve a suspicion some of it may be down to ‘prepping’ for MIDI 2.0 and note expression, which goes beyond expressive controllers/MPE ( * )


( * ) developers naturally break down big tasks, like supporting MIDI 2.0… and supporting per-note expression in their software/data model could be seen as ‘one step’ on that road.

Well, when it comes to use-cases, the use of other CCs on member channels both lacks practical scenarios on the synth and controller side of things, and also suffers from the nature of the central narrative in the official MPE spec. The way that document is written doesn’t draw much attention to other CCs, nor does it make clear whether they are really anticipated to be used for continuous signals the way CC74 is, as opposed to sporadic single values. They aren’t given equal status, so I can’t blame developers for overlooking them, especially without particular scenarios staring them in the face via MPE controllers that have even more expressive dimensions.

Many of the scenarios where I might want to use such things now, with the synths and controllers available, would only be to use a different CC instead of 74, simply to work around certain synths’ limited handling of CC74. There may also be practical considerations that caused the spec writers and implementers to shy away from additional continuous messages, e.g. bandwidth and data storage size; MIDI 2.0 won’t have the same constraints. To give one example, if it were commonly understood that MPE support required many more continuous signals to be recognised, then we may never have got any MPE support in the Squarp Hapax in the first place, since they have to consider the maximum amount of data a track and a full set of tracks can reach when calculating their RAM and project file size capacity.

I am certainly pleased with how much support we have got by this stage. Likely down to a combination of advocates like Geert Bevin, the MIDI Association waking up, ROLI burning tens of millions of pounds, the volume of user requests (loud despite being a minority) and MPE support in certain sectors becoming a competitive advantage, or at least something for the marketing departments to highlight. Plus, no doubt, some developers dabbled with MPE controllers and could see the compelling aspects of expressive playing with their own synth offerings.

All the same, I have noted a fair degree of skepticism even from some of the companies that did stick their toes in the MPE waters, e.g. both Cherry Audio and Sequential added MPE to some of their synths, but have continued to express skepticism about whether it’s worth it, and have left it out of subsequent offerings so far. I look to the likes of the Osmose for some maintenance and renewal of MPE momentum in the coming years.
