An open source project from Data to AI Lab at MIT.
A simple, extensible backend for developing auto-tuning systems.
- Free software: MIT license
- Documentation: https://HDI-Project.github.io/BTB
- Homepage: https://github.com/HDI-Project/BTB
Bayesian Tuning and Bandits (BTB) is a simple, extensible backend for developing auto-tuning systems such as AutoML systems. It is currently being used in ATM (an AutoML system that allows tuning of classifiers) and in MIT's system for the DARPA Data-Driven Discovery of Models (D3M) program.
BTB is under active development. If you come across any issues, please report them here.
BTB has been developed and tested on Python 3.5, 3.6 and 3.7.
Although it is not strictly required, using a virtualenv is highly recommended in order to avoid interfering with other software installed on the system where BTB is run.
These are the minimum commands needed to create a virtualenv using python3.6 for BTB:
pip install virtualenv
virtualenv -p $(which python3.6) btb-venv
Afterwards, activate the virtualenv by executing:
source btb-venv/bin/activate
Remember to run this command every time you start a new console to work on BTB!
After creating and activating the virtualenv, we recommend using pip to install BTB:
pip install baytune
This will pull and install the latest stable release from PyPI.
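If everything went well, you should be able to import the library from Python. Note that, even though the package is published on PyPI as baytune, the module it installs is named btb; the __version__ attribute used below is an assumption about the package metadata:

python -c "import btb; print(btb.__version__)"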
With your virtualenv activated, you can clone the repository and install it from source by running make install on the stable branch:
git clone [email protected]:HDI-Project/BTB.git
cd BTB
git checkout stable
make install
If you want to contribute to the project, a few more steps are required to make the project ready for development.
Please head to the Contributing Guide for more details about this process.
Tuners are specifically designed to speed up the process of selecting the optimal hyperparameter values for a specific machine learning algorithm.
btb.tuning.tuners defines Tuners: classes with a propose/record interface for suggesting sets of hyperparameters and learning from the scores they obtain.
This is done by following a Bayesian Optimization approach and iteratively:
- letting the tuner propose new sets of hyperparameters
- fitting and scoring the model with the proposed hyperparameters
- passing the obtained score back to the tuner
At each iteration the tuner uses the information obtained so far to propose the set of hyperparameters that it considers most likely to obtain the best results.
To instantiate a Tuner, all we need is a Tunable with a collection of hyperparameters:
>>> from btb.tuning import Tunable
>>> from btb.tuning.tuners import GPTuner
>>> from btb.tuning.hyperparams import IntHyperParam
>>> hyperparams = {
... 'n_estimators': IntHyperParam(min=10, max=500),
... 'max_depth': IntHyperParam(min=3, max=20),
... }
>>> tunable = Tunable(hyperparams)
>>> tuner = GPTuner(tunable)
Then we perform the following three steps in a loop.
- Let the Tuner propose a new set of parameters:

>>> parameters = tuner.propose()
>>> parameters
{'n_estimators': 297, 'max_depth': 3}
- Fit and score a new model using these parameters:

>>> from sklearn.ensemble import RandomForestClassifier
>>> model = RandomForestClassifier(**parameters)
>>> model.fit(X_train, y_train)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=3, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=297, n_jobs=1,
            oob_score=False, random_state=None, verbose=0, warm_start=False)
>>> score = model.score(X_test, y_test)
>>> score
0.77
- Pass the used parameters and the obtained score back to the tuner:

>>> tuner.record(parameters, score)
At each iteration, the Tuner will use the information about the previous tests to evaluate and propose the set of parameter values that have the highest probability of obtaining the highest score.
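Putting the three steps together, a minimal sketch of the full tuning loop could look like the following. The number of iterations and the use of RandomForestClassifier are only illustrative, and X_train, y_train, X_test and y_test are assumed to be already available:

>>> best_score, best_params = None, None
>>> for _ in range(10):  # illustrative number of tuning iterations
...     # 1. ask the tuner for a new set of hyperparameters
...     parameters = tuner.propose()
...     # 2. fit and score a model using the proposed hyperparameters
...     model = RandomForestClassifier(**parameters).fit(X_train, y_train)
...     score = model.score(X_test, y_test)
...     # 3. pass the obtained score back to the tuner
...     tuner.record(parameters, score)
...     # keep track of the best result found so far
...     if best_score is None or score > best_score:
...         best_score, best_params = score, parameters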
Selectors are intended to be used in combination with Tuners in order to decide which model is likely to get the best results once it is properly tuned.
In order to use the selector we will create a Tuner instance for each model that we want to try out, as well as the Selector instance.
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.svm import SVC
>>> from btb.selection import UCB1
>>> from btb.tuning.hyperparams import FloatHyperParam
>>> models = {
... 'RF': RandomForestClassifier,
... 'SVC': SVC
... }
>>> selector = UCB1(['RF', 'SVC'])
>>> rf_hyperparams = {
... 'n_estimators': IntHyperParam(min=10, max=500),
... 'max_depth': IntHyperParam(min=3, max=20)
... }
>>> rf_tunable = Tunable(rf_hyperparams)
>>> svc_hyperparams = {
... 'C': FloatHyperParam(min=0.01, max=10.0),
... 'gamma': FloatHyperParam(min=0.000000001, max=0.0000001)
... }
>>> svc_tunable = Tunable(svc_hyperparams)
>>> tuners = {
... 'RF': GPTuner(rf_tunable),
... 'SVC': GPTuner(svc_tunable)
... }
Then we perform the following steps in a loop.
- Pass all the obtained scores to the selector and let it decide which model to test:

>>> next_choice = selector.select({
...     'RF': tuners['RF'].scores,
...     'SVC': tuners['SVC'].scores
... })
>>> next_choice
'RF'
- Obtain a new set of parameters from the indicated tuner and create a model instance:

>>> parameters = tuners[next_choice].propose()
>>> parameters
{'n_estimators': 289, 'max_depth': 18}
>>> model = models[next_choice](**parameters)
- Evaluate the score of the new model instance and pass it back to the tuner:

>>> model.fit(X_train, y_train)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=18, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=289, n_jobs=1,
            oob_score=False, random_state=None, verbose=0, warm_start=False)
>>> score = model.score(X_test, y_test)
>>> score
0.89
>>> tuners[next_choice].record(parameters, score)
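Putting these steps together, a minimal sketch of the combined selection and tuning loop could look like this, again assuming the train and test splits are already available and using an arbitrary number of iterations:

>>> for _ in range(10):  # illustrative number of iterations
...     # let the selector pick the most promising model based on the scores so far
...     next_choice = selector.select({
...         'RF': tuners['RF'].scores,
...         'SVC': tuners['SVC'].scores
...     })
...     # get new hyperparameters from the corresponding tuner and build the model
...     parameters = tuners[next_choice].propose()
...     model = models[next_choice](**parameters).fit(X_train, y_train)
...     # score the model and pass the result back to the tuner that proposed it
...     score = model.score(X_test, y_test)
...     tuners[next_choice].record(parameters, score)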
For more details about BTB and all its possibilities and features, please check the project documentation site!
If you use BTB, please consider citing our related papers.
For the current design of BTB and its usage within the larger Machine Learning Bazaar project at the MIT Data To AI Lab, please see:
Micah J. Smith, Carles Sala, James Max Kanter, and Kalyan Veeramachaneni. "The Machine Learning Bazaar: Harnessing the ML Ecosystem for Effective System Development." arXiv Preprint 1905.08942. 2019.
@article{smith2019mlbazaar,
author = {Smith, Micah J. and Sala, Carles and Kanter, James Max and Veeramachaneni, Kalyan},
title = {The Machine Learning Bazaar: Harnessing the ML Ecosystem for Effective System Development},
journal = {arXiv e-prints},
year = {2019},
eid = {arXiv:1905.08942},
pages = {arXiv:1905.08942},
archivePrefix = {arXiv},
eprint = {1905.08942},
}
For the initial design of BTB, usage of Recommenders, and initial evaluation, please see:
Laura Gustafson. "Bayesian Tuning and Bandits: An Extensible, Open Source Library for AutoML." Master's thesis, MIT EECS, June 2018.
@mastersthesis{gustafson2018bayesian,
author = {Gustafson, Laura},
title = {Bayesian Tuning and Bandits: An Extensible, Open Source Library for AutoML},
month = {May},
year = {2018},
url = {https://dai.lids.mit.edu/wp-content/uploads/2018/05/Laura_MEng_Final.pdf},
type = {M. Eng Thesis},
school = {Massachusetts Institute of Technology},
address = {Cambridge, MA},
}