The way we perform cross-validation now is quite slow, because we must: convert mutations to be with respect to the reference sequence, build the one-hot matrix, and finally compile the predictive function before making predictions. One thing that would drastically speed things up (and simplify the interface / source code) would be to add the option to split the full dataset at the time of Data initialization (the place to do that would be here). Then getting validation loss would be as simple and fast as getting loss, and it would be trivial to get conditional loss.
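A minimal sketch of the idea, assuming a hypothetical `Data` class (the names `val_frac`, `x_train`, `x_val`, and the MSE loss are illustrative, not the actual interface): the expensive steps (mutation conversion, one-hot encoding) happen once, before the split, so validation loss reuses the same preprocessed arrays as training loss.

```python
import numpy as np

class Data:
    """Sketch: optionally hold out a validation split at initialization.

    `x` is the already-built one-hot matrix and `y` the measurements, so
    the expensive preprocessing runs once; both splits share its result.
    """

    def __init__(self, x, y, val_frac=0.0, seed=0):
        x, y = np.asarray(x), np.asarray(y)
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))          # shuffled row indices
        n_val = int(round(val_frac * len(y)))  # size of held-out split
        val_idx, train_idx = idx[:n_val], idx[n_val:]
        self.x_train, self.y_train = x[train_idx], y[train_idx]
        self.x_val, self.y_val = x[val_idx], y[val_idx]

    def loss(self, predict_fn):
        # Mean squared error on the training split (illustrative loss).
        return float(np.mean((predict_fn(self.x_train) - self.y_train) ** 2))

    def val_loss(self, predict_fn):
        # Identical computation on the held-out split: no re-encoding
        # of mutations and no recompiling of the predictive function.
        return float(np.mean((predict_fn(self.x_val) - self.y_val) ** 2))
```

With this shape, `val_loss` is just `loss` evaluated on different rows, which is what makes conditional losses on arbitrary subsets equally cheap.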