Replies: 2 comments 5 replies
-
@RafaD5 great idea! I need to think about this. For sure there is a need to select only models that are fast during prediction. Maybe this should be the default set-up. How do you compute the 0.6 sec limit for a single sample prediction? How are you going to deploy the models as a REST API?
-
@RafaD5 I will do as you suggested. There will be a `max_single_prediction_time` parameter; I will try to implement it in an upcoming release. I also have a plan to implement drift detection, so if new data arrives it will be easy to detect whether you need to retrain the AutoML. Details: #179. Do you have such a problem in your production setting? Would you be interested in testing this feature?
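A minimal sketch of what such a drift check could look like, just to illustrate the idea (this is not MLJAR's implementation, which is tracked in #179; the detector, threshold, and data below are all hypothetical):

```python
import numpy as np

def detect_drift(train_col, new_col, z_thresh=3.0):
    # Flag drift if the new data's mean is far from the training mean,
    # measured in units of the standard error. A toy mean-shift check,
    # not MLJAR's actual drift detector.
    mu = train_col.mean()
    sigma = train_col.std(ddof=1)
    se = sigma / np.sqrt(len(new_col))
    z = abs(new_col.mean() - mu) / se
    return bool(z > z_thresh)

# Deterministic illustration data (hypothetical):
train = np.linspace(-3, 3, 10_001)     # training feature, mean 0
same = np.linspace(-3, 3, 1_001)       # new data, same distribution
shifted = np.linspace(-2, 4, 1_001)    # new data, mean shifted to 1

print(detect_drift(train, same))     # False: no drift
print(detect_drift(train, shifted))  # True: retraining may be needed
```

In production this would run on each incoming batch, and a positive result would trigger retraining the AutoML on the combined data.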
-
Hi!
I'm using MLJAR to train models that will be used in production, so prediction time is an important metric.
For example, MLJAR found that the model with the best performance is an ensemble with a warm prediction time of 1.2 seconds for one sample. I have a limit of 0.6 seconds. So what I did was sort the models that make up the ensemble by weight (repeat count) and measure the prediction time of each one.
Then I trimmed from the bottom up until the prediction time of the ensemble was lower than 0.6 seconds and, to my surprise, the performance of the ensemble decreased very little.
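The trimming procedure described above can be sketched as a greedy loop that drops the lowest-weight ensemble members first until the time budget is met (the model names, weights, and timings below are made up for illustration; in practice the times come from measuring each model's predict call):

```python
# Hypothetical ensemble members: (name, weight, per-sample seconds).
# Weights are the "repeat" counts MLJAR assigns to ensemble members.
models = [
    ("xgboost_1", 5, 0.30),
    ("lightgbm_2", 3, 0.20),
    ("catboost_1", 2, 0.25),
    ("nn_1", 1, 0.45),
]

def ensemble_time(members):
    # Each member model is evaluated once per prediction,
    # regardless of its weight in the ensemble average.
    return sum(t for _, _, t in members)

def trim(members, limit):
    # Keep high-weight models; drop the lowest-weight ones
    # until the ensemble fits the prediction-time budget.
    members = sorted(members, key=lambda m: m[1], reverse=True)
    while members and ensemble_time(members) > limit:
        members.pop()  # remove the current lowest-weight member
    return members

kept = trim(models, limit=0.6)
print([name for name, _, _ in kept])  # ['xgboost_1', 'lightgbm_2']
print(ensemble_time(kept))            # 0.5, under the 0.6 s limit
```

Dropping low-weight members first is why the score barely moves: they contribute the least to the weighted average but still cost a full predict call each.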
I think it would be nice to have a parameter `max_single_prediction_time` so that this is taken into account during training. Furthermore, single models with a prediction time greater than `max_single_prediction_time` could be ignored and not further optimized in the remaining steps of the training.
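One way the proposed check could work is to time a single-sample predict call a few times and compare the median against the budget (a sketch only; `predict_fn`, the stand-in models, and the helper names are hypothetical, not MLJAR's API):

```python
import time

def single_prediction_time(predict_fn, sample, n_repeats=50):
    # Median wall-clock time of predicting one sample; the median
    # is less sensitive to one-off scheduler hiccups than the mean.
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        predict_fn(sample)
        times.append(time.perf_counter() - start)
    times.sort()
    return times[len(times) // 2]

def fits_budget(predict_fn, sample, limit=0.6, n_repeats=50):
    # Models over the limit would be skipped in later training steps.
    return single_prediction_time(predict_fn, sample, n_repeats) <= limit

fast_model = lambda x: x * 2                     # stand-in fast model
slow_model = lambda x: (time.sleep(0.01), x)[1]  # stand-in slow model
```

Measuring after a warm-up call would match the "warm" timing described above, since the first prediction often pays one-time setup costs.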