I can see why people think that Playtime is more complex, with all the fancy GUI stuff and so on, but actually it isn't. ReaLearn is a much, much more involved piece of software than Playtime, maybe because it's so generic and able to do nearly everything. It even partially covers Playtime's territory with the "Clip" targets, a feature that I want to develop further. I'll think a bit about the automation issue in general and how to make it user-friendly for the ReaLearn clip feature or a future Playtime.
I would like to open a discussion for users of both @helgoboss Reaper extensions, as I imagine I'm not the only one, to suggest setups and discuss feature integration between the two products.
I have to say that ReaLearn development over the last year has reached heights I personally didn't expect, with functionality well above my expectations. This naturally makes Playtime (in theory a much more complex product, I imagine) look much less refined in comparison. There are Playtime features (such as parameters for undo/redo) that this thread is not supposed to be about; I would like to focus only on the aspects that are directly linked to ReaLearn, concentrating more on workarounds than on actual Playtime development (for which we all eagerly await Playtime v2).
I'll start by sharing my own experience, after having used both for rehearsal and recording for almost 2 years. Note that my Playtime setup may be atypical: I don't use Playtime tracks, and instead only arm the Playtime track, with MIDI sends to all the instrument tracks. I record more than one track at a time because of keyboard splits, and that is the only way I found to use tempo-recognition recording while playing more than one instrument (on different tracks) at once. If I recorded every track on its own, my life would be immensely simpler.
In my Playtime-based live-looping setup, the biggest challenge has always been recording automation, which, notwithstanding automation items, is not trivial: it requires recording the MIDI CC into Playtime and then using the output of Playtime to drive ReaLearn. Problems arise very quickly if you are used to using a single CC to control different targets on different tracks. My way around the issue (before Conditional Activation and the MIDI send message target were implemented in ReaLearn) was to use my own JSFX to create virtual copies of each MIDI CC, so they would look different in Playtime and then in ReaLearn, roughly as sketched below. Now that ReaLearn has Conditional Activation and the MIDI send message target, I can simply use two ReaLearn instances, one in the input FX with conditional activation and another one after Playtime. Naturally, that means duplicating all the assignments. I am open to smarter ways of reaching the same result.
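For illustration, here is a minimal sketch of what such a "virtual copy" JSFX can look like (this is not the exact script, and the CC numbers are just example slider defaults): for a chosen source CC it passes the original through and additionally emits a copy on a second CC number, so the two streams can be told apart downstream.

```
desc:CC virtual copy (minimal sketch, not the original script)
slider1:1<0,127,1>Source CC
slider2:21<0,127,1>Copy to CC

@block
while (midirecv(offset, msg1, msg2, msg3)) (
  midisend(offset, msg1, msg2, msg3);   // pass the original event through

  // If it's a CC message (status 0xB0) on the source CC number,
  // also emit a copy on a second CC number (same channel, same value).
  ((msg1 & 0xF0) == 0xB0 && msg2 == slider1) ? (
    midisend(offset, msg1, slider2, msg3);
  );
);
```

Whether you keep or drop the original CC (or remap instead of duplicating) depends on the routing; the point is just that Playtime records a CC number that no longer collides with the one used elsewhere.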
As I am entering the world of MPE (eagerly waiting for my Osmose), I am currently studying how to accommodate an MPE instrument (which uses all the MIDI channels) plus a non-MPE instrument in a single Playtime session, controlling different instruments. My guess is: using two different inputs, transform the non-MPE stream into poly aftertouch messages to be written into Playtime, and then retranslate them into note messages with a purpose-built JSFX at track level (see the sketch below).
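For illustration, a minimal sketch of the track-level decoder half, assuming one possible encoding convention (poly aftertouch with value > 0 means note-on at that velocity, value 0 means note-off); the matching encoder placed before Playtime would do the reverse translation:

```
desc:PolyAT-to-note decoder (minimal sketch, encoding convention assumed)

@block
while (midirecv(offset, msg1, msg2, msg3)) (
  (msg1 & 0xF0) == 0xA0 ? (
    // Assumed convention: the poly aftertouch "note" byte carries the
    // encoded note number; value > 0 = note-on, value 0 = note-off.
    msg3 > 0 ?
      midisend(offset, 0x90 | (msg1 & 0x0F), msg2, msg3)   // note-on, same channel
    :
      midisend(offset, 0x80 | (msg1 & 0x0F), msg2, 0);     // note-off
  ) : (
    midisend(offset, msg1, msg2, msg3);   // everything else passes through
  );
);
```

This obviously assumes the non-MPE instrument doesn't also send real poly aftertouch of its own; if it does, the encoding would need an extra disambiguation step.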