That's pretty much what my voice class in MEC is doing…
however, I also added voicing, as that's almost always needed as well…
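For context, here's a minimal sketch of what a voice-allocation layer like that can look like — a fixed pool of voices, assigned to touches, stealing the oldest voice when the pool runs out. This is illustrative only (names and structure are mine, not MEC's actual API):

```cpp
#include <deque>
#include <map>
#include <utility>

// Hypothetical voice allocator sketch: maps key/touch ids to a fixed
// pool of voices, stealing the oldest active voice when the pool is full.
class VoiceAllocator {
public:
    explicit VoiceAllocator(int maxVoices) {
        for (int v = 0; v < maxVoices; ++v) free_.push_back(v);
    }

    // Assigns a voice to this key; steals the oldest active voice if none free.
    int noteOn(int key) {
        int voice;
        if (!free_.empty()) {
            voice = free_.front();
            free_.pop_front();
        } else {
            voice = active_.front().second;          // steal oldest
            keyToVoice_.erase(active_.front().first);
            active_.pop_front();
        }
        active_.push_back({key, voice});
        keyToVoice_[key] = voice;
        return voice;
    }

    // Releases the voice held by this key (no-op if already stolen).
    void noteOff(int key) {
        auto it = keyToVoice_.find(key);
        if (it == keyToVoice_.end()) return;
        int voice = it->second;
        keyToVoice_.erase(it);
        for (auto a = active_.begin(); a != active_.end(); ++a) {
            if (a->first == key) { active_.erase(a); break; }
        }
        free_.push_back(voice);
    }

private:
    std::deque<int> free_;                    // voices available
    std::deque<std::pair<int,int>> active_;   // (key, voice), oldest first
    std::map<int,int> keyToVoice_;            // key -> assigned voice
};
```

Oldest-note stealing is just one policy, of course — last-note or quietest-note stealing are equally valid, which is partly why this belongs above a thin hardware wrapper rather than inside it.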
Anyway, it's not something I'd want to add to EigenLite; as already mentioned, I want to keep EigenLite as a thin wrapper over the EigenLabs code.
MEC was kind of designed to be the 'next level up';
it also has an 'API' layer, so it doesn't have to be used as a concrete implementation.
That said, I kind of lost my way a bit with MEC… I started using it to wrap other devices as well, and that increased the complexity and diluted its focus…
This came to a head when I abstracted 'surface layouts' and started having difficulties with how this would work across different devices.
Basically, the bi-directional comms is very challenging: it works nicely going from keys → note scales, and with splits… but sending the LED information back was getting very complex.
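The 'easy' forward direction is simple enough to sketch: a key index, a scale, and a split point map cleanly to a note number. (This is an illustrative sketch, not MEC's actual surface-layout code — the hard part is the reverse: LED state depends on the whole mapping, per device.)

```cpp
#include <vector>

// Illustrative key -> note mapping with a scale and a simple two-zone split.
// Not MEC's actual layout code; names are hypothetical.
struct Split {
    int splitKey;   // keys below this index belong to the lower zone
    int lowerRoot;  // MIDI root note of the lower zone
    int upperRoot;  // MIDI root note of the upper zone
};

int keyToNote(int key, const std::vector<int>& scale, const Split& split) {
    bool lower = key < split.splitKey;
    int zoneKey = lower ? key : key - split.splitKey;  // key index within zone
    int root = lower ? split.lowerRoot : split.upperRoot;
    int n = static_cast<int>(scale.size());
    int octave = zoneKey / n;
    int degree = zoneKey % n;
    return root + 12 * octave + scale[degree];
}
```

Going the other way — deciding what colour every LED should be — means inverting this mapping plus any per-device quirks, which is where the complexity exploded.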
…so I took a break, and never quite got back to it.
On top of that, I also started chucking in support for a ton of other devices, with some general idea of it becoming a performance environment that brought all my devices together…
and whilst the design is quite clean in the way I did this, it has really lost that all-important focus.
So, really, I need to decide what to do with MEC… I need to re-focus it a bit, and perhaps split things off into separate projects to help reduce the complexity.
… and most importantly, focus on a real day-to-day use-case for my needs.
(if I actively use it, I'm more likely to develop it!)