Replies: 5 comments 11 replies
-
I think the genetic algorithm one would be feasible on Raspberry Pi hardware. The others will likely be difficult to implement and distribute even on conventional hardware; TensorFlow-style neural networks often rely on special hardware. I saw a commercial VST a while ago that used the genetic algorithm technique. It had a really cool interface where it would generate approximations, and the user could guide the algorithm, for example by choosing the closest approximation. Synplant is on my wish list of synths: https://www.youtube.com/watch?v=nIllwZc6BIE I have seen that machine learning algorithm before. If I recall correctly, it was able to emulate the same sounds on the DX7 but used fewer operators than humans did. Like it only needed three operators to do what humans were doing with seven.
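A minimal sketch of the genetic-algorithm idea described above: evolve a population of synth parameter vectors toward a target sound. The "synth" here is a toy two-operator FM oscillator standing in for a real engine, and the parameter names, ranges, and GA settings are all illustrative, not anything from Synplant or a DX7.

```python
# Toy GA for sound matching: evolve [carrier_hz, ratio, mod_index] so the
# rendered audio approaches a target recording. Illustrative only.
import math
import random

SR = 8000   # sample rate (kept low so the demo stays cheap)
N = 1024    # samples per rendered snippet

def render(params):
    """Render a short snippet from a 2-op FM patch [carrier_hz, ratio, mod_index]."""
    carrier, ratio, index = params
    return [math.sin(2 * math.pi * carrier * t / SR
                     + index * math.sin(2 * math.pi * carrier * ratio * t / SR))
            for t in range(N)]

def fitness(params, target):
    """Negative mean squared error against the target waveform (higher is better)."""
    rendered = render(params)
    return -sum((a - b) ** 2 for a, b in zip(rendered, target)) / N

def evolve(target, pop_size=30, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(100, 1000), rng.uniform(0.5, 4), rng.uniform(0, 5)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        elite = pop[:pop_size // 3]                 # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
            i = rng.randrange(3)
            child[i] += rng.gauss(0, 0.1 * abs(child[i]) + 0.01)   # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, target))

# Try to recover hidden parameters [440 Hz, ratio 2, index 1.5] from audio alone.
hidden = [440.0, 2.0, 1.5]
target = render(hidden)
best = evolve(target)
```

In an interactive tool like the one described, the user's choice of "closest approximation" would replace or supplement the automatic fitness function as the selection step.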
-
@RK11111111111, I checked the TensorFlow documentation on embedded-device inference: it is possible to run deep learning models on the Raspberry Pi if the models are converted to the TensorFlow Lite format, as long as they use compatible ML operators. @probonopd If I understand correctly, you are looking for something similar to the Genopatch feature in the "Synplant 2" VST by Sonic Charge, where the input is an audio file and the model's output configures the synth parameters to match the timbre of the input audio. Further reading: the research paper that inspired the creation of SpiegeLib, and interesting DDSP VST projects (similar to the DDX7 method) by Google's Magenta team:
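For reference, the conversion step mentioned above looks roughly like this. The model architecture is a made-up placeholder (a tiny dense net mapping an audio feature vector to normalized synth parameters); a real estimator and its input features would come from a project like SpiegeLib.

```python
# Hedged sketch: convert a Keras parameter-estimator model to TensorFlow Lite
# so it can run on a Raspberry Pi. The network below is a stand-in, not a
# real sound-matching model.
import tensorflow as tf

# Placeholder estimator: 40 spectral features in -> 13 synth parameters out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(40,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(13, activation="sigmoid"),  # normalized params
])

# Convert to the .tflite flatbuffer format used on embedded devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# On the Pi itself you would typically use the lightweight interpreter
# (the "tflite-runtime" pip package exposes the same Interpreter class).
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
```

The "compatible ML operators" caveat matters here: if the model uses ops outside the TFLite builtin set, `converter.convert()` fails unless you enable the larger (and heavier) TF op fallback.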
-
I've also been thinking of doing something similar with a wavetable synth such as Vital, with the goal of creating a bare-metal Raspberry Pi synth.
-
This repo looks very promising:
-
https://github.com/Sound2Synth/Sound2Synth also looks promising.
-
Traditionally, FM synthesizers have been (wrongly?) regarded as almost impossible to program "realistic-sounding" voices on.
Some sound designers have mastered this "black art", but for "the rest of us": what if we could record a sound (as on a sampler) and let some AI magic create a DX7 patch that matches it closely?
Some theory:
https://fcaspe.github.io/ddx7/
There is even source available to do that:
https://github.com/spiegelib/vst-fm-sound-match
Here are some examples:
https://spiegelib.github.io/spiegelib/examples/fm_sound_match_pages/fm_sound_match_listen.html#fm-sound-match-listen
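The listening examples above come down to comparing a target recording against the synth's re-rendered patch. One common objective for that comparison is a log-magnitude spectral distance; here is a minimal NumPy sketch of the idea (the actual SpiegeLib metrics use MFCC/STFT feature distances, so treat this as an illustration, not their implementation):

```python
# Toy spectral-distance metric for judging how closely a rendered patch
# matches a target recording. Signals are placeholders, not real DX7 audio.
import numpy as np

def log_spectral_distance(a, b, eps=1e-8):
    """RMS difference of the log-magnitude spectra of two equal-length signals."""
    spec_a = np.abs(np.fft.rfft(a)) + eps
    spec_b = np.abs(np.fft.rfft(b)) + eps
    return float(np.sqrt(np.mean((np.log(spec_a) - np.log(spec_b)) ** 2)))

sr, n = 8000, 2048
t = np.arange(n) / sr
target = np.sin(2 * np.pi * 440 * t)        # stand-in "recording"
close = np.sin(2 * np.pi * 442 * t)         # near-matching patch render
far = np.sign(np.sin(2 * np.pi * 110 * t))  # clearly different timbre

d_close = log_spectral_distance(target, close)
d_far = log_spectral_distance(target, far)
```

An optimizer (genetic algorithm, neural estimator, or otherwise) would minimize such a distance over the synth's parameter space.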