no announcement yet, but I found the product link is now live…
basically, it's an assistant to help you learn piano on the Piano M (aka Lumi)
some interesting details… (highlights)
Introducing ROLI Vision
ROLI Vision, Airwave’s infrared technology, reliably tracks all 27 joints in each of your hands at 90 frames per second to give you precise, real-time feedback on your playing.
Tech Specs
Expressive controller
ROLI Vision Hand-tracking Camera
3.5mm TRS Headphone Output
3.5mm TRS Pedal Input (incl. 6.3mm dongle)
2 x USB-C ports (data and power)
Magnetic USB port for use with compatible ROLI Hardware
Class-compliant MIDI over USB
Class-compliant audio interface
65W USB-C PD power supply
221mm x 122mm x 369mm / 812g
Device Requirements
Learning with Airwave requires ROLI Piano M and the ROLI Learn app.
Supported tablets for this setup include:
iPad Air (5th Gen) or newer
iPad Pro (5th Gen) or newer
Samsung Galaxy Tab S7 or newer
Our compatibility testing is focused primarily on recent tablet models from Samsung and Apple (with an Apple M1 or later processor).
it's announced incredibly early; it's not shipping till March 2025! (for 299 euros)
it does feel like, mid-term, this could potentially be something more than a tool for the Piano M…
I mean, why say it's an expressive controller? why class-compliant MIDI?
also, my initial impression is: does it really add much value to the learning experience on the Piano M, which already has visual feedback?
… though I admit, we don't have the marketing video (yet), so I'm sure that will make things clearer.
I'm wondering if perhaps, later, this might turn into something more…
though cameras facing down at the 'table' are going to be a bit limiting?
but yeah… I guess the marketing video will make the case clearer
edit: is the Piano M in some of the product shots different? a refresh perhaps?
(or is it just the angle, and that they sometimes have 2x Piano M connected?)
Over the years I have very occasionally speculated about the possibility of using cameras for expressive input.
In theory this new device could be used for that. I'm not sure it would be better than ROLI's existing Seaboards, but you could use it to add MPE Y to the Piano M for a start.
However, when I was reviewing ROLI's most recent accounting filings on the other thread, I noted that their pitch to investors hinges on how much the music learning economy is allegedly worth. So I'm not surprised the emphasis of this new product is all about that. They could expand in this other, expressive direction in future if they want to, but I doubt their projections for how big that market could be come close to their levels of expenditure. So I suspect how long they can continue to exist will come down to demand for the learning version of this product. I am skeptical about how much momentum this product will have, and so 2025 could be a rather tricky year for them.
I note their '5 things' marketing obsession continues with this new product. With the Seaboards we had the '5 dimensions of expression', but they have taken this to silly new lengths by shoehorning two different sets of 5 things into the marketing this time:
“five key dimensions of piano technique: postural, positional, harmonic, rhythmic, and dynamic.”
and
“Based on the five “keys” of sound, sight, touch, vision, and voice, Music Intelligence unites current and future ROLI instruments into an expanding ecosystem for creation and expression.”
I should probably work on refining my concept of 'five reasons this company will continue to burn through ridiculous amounts of money and will not survive without ongoing investment of hefty sums'.
yeah, the issue is… as I said during the speculation.
this could all be done with an AR headset… a Quest 3S is $299, no tablet needed.
and you can use PianoVision with any piano/keyboard, a much better way to learn piano (*) than the Piano M
ok, I admit, I'd not want to wear a headset whilst practicing… but I'd prefer it over this!
(*) ok, I don't think 'Guitar Hero style' is a good way to learn piano… but it's the way ROLI/PianoVision do it… and it is, like it or not, popular.
Their designers decided to type “Play Video” instead of leveraging common iconography to indicate that one of the photos would be clickable. You basically have to stare at your mouse cursor to find it.
Well, one thing I noticed was that the iPad is used to mirror your hand position during performance, so assuming the delay between scanning your fingers and the screen is tolerable, you may be able to learn to reduce your visual dependency on the keyboard quite quickly. The other speculation I have is that the Airwave and ROLI Learn will provide suggestions to improve finger movements when traversing the keybed, but there is no showcase of such a concept in the video.
I don't completely get why this is bound to the Piano M. It would make more sense if it worked with a(ny) real piano.
Hope they offer an API to get the hand-tracking info. That would actually be pretty interesting!
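just to make it concrete, here's the sort of thing I'd want from a hand-tracking API, based purely on the '27 joints per hand at 90fps' spec above. None of these names come from ROLI; it's a made-up sketch of the data I'd hope to get per frame (in Python):

```python
# Hypothetical hand-tracking frame, assuming 27 joints per hand at 90 fps.
# All names and the "joint index 8 = index fingertip" mapping are my own guesses.
from dataclasses import dataclass
from typing import List, Tuple

Joint = Tuple[float, float, float]  # x, y, z in some device-relative space

@dataclass
class HandFrame:
    timestamp_ms: float
    left: List[Joint]   # 27 joints
    right: List[Joint]  # 27 joints

def which_hand_pressed(frame: HandFrame, key_x: float) -> str:
    """Crude guess at the 'which hand hit that key' question:
    compare each index fingertip's x position to the key's x position."""
    INDEX_TIP = 8  # arbitrary assumption about joint ordering
    left_dist = abs(frame.left[INDEX_TIP][0] - key_x)
    right_dist = abs(frame.right[INDEX_TIP][0] - key_x)
    return "left" if left_dist < right_dist else "right"

# toy example
frame = HandFrame(0.0, [(0.0, 0.0, 0.0)] * 27, [(0.3, 0.0, 0.0)] * 27)
print(which_hand_pressed(frame, 0.25))  # -> "right"
```

even just that much, at 90fps, would open up a lot for third-party apps.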
Wow, I’m pleasantly surprised. It has precious little to do with whatever was hyped, but it’s great that it’s an accessory, and that it can extend the use of my Seaboard M. I was expecting something with a screen, at ten times the price, but I’m happy that I was wrong. Not sure if the investors will be, though… or anyone who bought into the hype of how this would change “everything.”
Yeah, I haven't had time to watch that video but I'm starting to see elsewhere that there is a different sort of marketing for this product, involving added expressivity, that they are pushing on other pages and videos. Stuff that goes beyond our initial reactions to the learning stuff earlier in this thread. I will have a look when I get a chance.
It still doesn't feel like they've justified the technology. Like, it's great that these gestures are visually attached to the playing keys, but what does it mean that they are? Why display different keyboards? Was it important to display any?
To my mind, yes. As an app developer, I’ve always wanted to reference some metaknowledge about the performance. Like… When I receive a note, I’d love to know which finger pressed it. Or at least, which hand. There’s a lot I could do with that information.
I think the hardware could deliver that. But I don’t trust Roli to give developers access.
So… what I need to see is this class compliant MIDI output.
What can we use, without relying on their support?
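One nice thing about class-compliant MIDI is that we don't need their support just to see what the box actually sends: any standard MIDI library can open it and dump the stream. A minimal sketch in Python with mido (the port name is a guess; use whatever shows up in the list):

```python
# Dump whatever the device emits over class-compliant MIDI, no ROLI SDK needed.
import mido

# List the MIDI inputs the OS sees; the Airwave should appear here
# if it really is class compliant.
print(mido.get_input_names())

PORT_NAME = "Airwave"  # hypothetical; substitute the actual port name from the list

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        # Print everything, so we can see whether finger/hand info is encoded
        # anywhere (per-channel MPE? some CC scheme? nothing at all?).
        print(msg)
```

If finger or hand identity is in the stream at all, it'll show up in this dump; if it isn't, then the interesting data is locked up in their app.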
yeah, it's interesting how 'standalone' this will be; as I mentioned in the first post, the tech specs do imply standalone possibilities (USB class-compliant MIDI/audio).
for me the main question is… are they relying on the tablet to do the processing?
if so, then there is a dependency on the Roli software.
as for the rest, I suspect it'll just be pushing out various CCs for the air expression.
as you say, it's interesting that, theoretically, this offers per-finger expression.
they can sense which finger hit a note, then track that finger even after it leaves the keybed… and know when it touches a new key.
ofc, I suspect that this is not being done right now (as it was not part of the demos), but it seems feasible IF they can track fingers accurately.
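to sketch what that could look like over plain class-compliant MIDI: the obvious route would be something MPE-ish, one member channel per active touch (as the Seaboards already do), so a 'finger' is just whatever channel a note arrived on, and any later bend/slide/pressure on that channel stays attached to it. Entirely speculative for the Airwave, but it needs nothing beyond standard MIDI (again Python/mido, port name made up):

```python
# Speculative: attribute per-note expression to "fingers" if the Airwave used
# an MPE-style scheme (one channel per active touch). Not confirmed by ROLI.
import mido
from collections import defaultdict

PORT_NAME = "Airwave"  # hypothetical port name

active = {}                     # channel -> note currently owned by that touch/finger
expression = defaultdict(dict)  # channel -> latest expression values for that touch

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        ch = getattr(msg, "channel", None)
        if msg.type == "note_on" and msg.velocity > 0:
            active[ch] = msg.note
        elif msg.type == "note_off" or (msg.type == "note_on" and msg.velocity == 0):
            active.pop(ch, None)
            expression.pop(ch, None)
        elif msg.type == "pitchwheel":
            expression[ch]["bend"] = msg.pitch       # MPE X (glide)
        elif msg.type == "control_change" and msg.control == 74:
            expression[ch]["slide"] = msg.value      # MPE Y (slide)
        elif msg.type == "aftertouch":
            expression[ch]["pressure"] = msg.value   # MPE Z (press)

        if ch in active:
            print(f"ch{ch} note {active[ch]} -> {dict(expression[ch])}")
```

whether they'd bother exposing it that way is another matter, but the hardware plus class-compliant MIDI would be enough.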