Getting a model that works for everyone with minimal calibration is a hard problem. It would likely require a lot of data that I do not have and would not feel ethically comfortable gathering and uploading to this public repo. I considered topics like federated learning, but decided my time would be better spent building tools that let others gather and explore their own data, which they can then use to train and experiment with their own models.
The most obvious source of cross-session variance is sensor placement. My personal belief is that calibration is the easiest way to handle this for a niche open source project like this one, although more tools should be made to aid calibration, as mentioned here.
For cross-subject generalisation, my initial answer is to retrain your own models, especially as it reduces the amount of private, likely uniquely identifiable data that needs to be uploaded or shared. For example, finger classification and gesture models can be trained with a live classifier in less than 3 minutes, which is not too bad for the current scope of this project. A rough sketch of this kind of per-user retraining is shown below.
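Purely as an illustration (not this repo's actual pipeline), here is a minimal sketch of retraining a per-user gesture classifier from your own recorded EMG with scikit-learn. The file name, column layout and mean-absolute-value windowed features are assumptions made for the example.

```python
# Sketch: retrain a per-user gesture classifier from your own EMG recording.
# Assumptions (not from this repo): a CSV with 8 EMG channel columns plus a
# "label" column of integer gesture ids.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

WINDOW = 50  # samples per feature window (assumed; Myo EMG streams at 200 Hz)

def window_features(emg, labels, window=WINDOW):
    """Mean absolute value per channel over non-overlapping windows."""
    X, y = [], []
    for start in range(0, len(emg) - window, window):
        seg = emg[start:start + window]
        X.append(np.abs(seg).mean(axis=0))
        # label the window with its most common gesture id
        y.append(np.bincount(labels[start:start + window]).argmax())
    return np.array(X), np.array(y)

df = pd.read_csv("my_emg_recording.csv")              # hypothetical file name
emg = df[[f"emg_{i}" for i in range(8)]].to_numpy()   # assumed column names
labels = df["label"].to_numpy()

X, y = window_features(emg, labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Because everything stays on your own machine, nothing identifiable has to be shared.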
Regarding future work, transfer learning is the most obvious method; however, for privacy reasons I have so far only gathered data on myself, so I have no metrics on how my models generalise to others (I would assume badly). Adversarial Domain Adaptation (ADA) was implemented in Sosin et al.'s paper, which can be found here, but given the numerical results reported in the paper, I did not bother implementing it here.
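To make the transfer learning idea concrete (again, not something implemented in this repo), a crude warm-start sketch: pre-train on windows from one subject, then continue training on a short calibration set from a new subject. The model choice, feature dimensions and label counts below are placeholders.

```python
# Sketch only: crude "transfer" by warm-starting on subject A's data and then
# continuing training on a small calibration set from subject B.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data: X_source/y_source = many labelled windows from subject A,
# X_target/y_target = a short calibration recording from subject B.
rng = np.random.default_rng(0)
X_source, y_source = rng.normal(size=(2000, 8)), rng.integers(0, 5, 2000)
X_target, y_target = rng.normal(size=(200, 8)), rng.integers(0, 5, 200)

scaler = StandardScaler().fit(X_source)
classes = np.unique(y_source)

clf = SGDClassifier(loss="log_loss")
clf.partial_fit(scaler.transform(X_source), y_source, classes=classes)  # "pre-train"
clf.partial_fit(scaler.transform(X_target), y_target)                   # "fine-tune"

print("target-subject accuracy:", clf.score(scaler.transform(X_target), y_target))
```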
How do you actually calibrate the Myo? I got my hands on one, but calibration still feels pretty hard after some time. What kind of calibration did you use?
@18Markus1984 In this project, calibration is based on the OpenGloves driver (so no wrist-breaking move) to enable machine learning on EMG data and finger flexion. I recommend first looking at pyomyo; a minimal streaming example is sketched below.
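For anyone starting out, here is a minimal sketch of streaming EMG with pyomyo, based on its documented usage; treat the exact mode name and handler signature as assumptions and check the pyomyo README for the canonical example.

```python
# Minimal sketch: stream 8-channel EMG from a Myo with pyomyo.
import multiprocessing
from pyomyo import Myo, emg_mode

def worker(q):
    m = Myo(mode=emg_mode.PREPROCESSED)  # other modes: RAW, FILTERED (per pyomyo docs)
    m.connect()

    def add_to_queue(emg, movement):
        q.put(emg)

    m.add_emg_handler(add_to_queue)
    m.vibrate(1)  # short buzz to confirm the connection

    while True:
        m.run()

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()

    try:
        while True:
            emg = list(q.get())  # one 8-channel EMG sample
            print(emg)
    except KeyboardInterrupt:
        p.terminate()
```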