
support for sklearn cross-validation #117

Open · Casyfill opened this issue Dec 20, 2017 · 2 comments

@Casyfill

How should I "weave" a hyperdash Experiment object together with a cross-validation parameter dictionary/list?

For now I have this cell, but I'd love to pass "clean" parameters to hyperdash:

%%monitor_cell "RF GRIDSEARCH"

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

score = 'f1'  # assumed; `score` was undefined in the original cell

tuned_parameters = {'n_estimators': [20, 50, 100],
                    'criterion': ['gini', 'entropy'],
                    'max_features': ['auto', 'sqrt', 0.2, 0.4],
                    'min_samples_leaf': [50],
                    'bootstrap': [True],
                    'oob_score': [True],
                    'n_jobs': [2],
                    'random_state': [2017],
                    'class_weight': ['balanced'],
                    'verbose': [1]}

clf = GridSearchCV(RandomForestClassifier(), tuned_parameters, cv=5,
                   scoring=f'{score}_macro')
clf.fit(trainX, trainY)
print(clf.best_params_)

# cv_results_ holds the mean and std of the test score for each candidate.
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))

print("Detailed classification report:\n"
      "The model is trained on the full development set.\n"
      "The scores are computed on the full evaluation set.")

y_true, y_pred = testY, clf.predict(testX)
print(classification_report(y_true, y_pred))

(This resembles the grid-search example from the sklearn docs.)

@andrewschreiber (Contributor)

I’d recommend taking a look at our Experiments API docs (https://github.com/hyperdashio/hyperdash-sdk-py#experiment-instrumentation). It will give you more fine-grained control over the start and end of your experiments.
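
(For concreteness, a minimal sketch of that instrumentation wrapped around the grid search above, based on the Experiment, exp.param, exp.metric, and exp.end calls shown in the linked README. The experiment name and the best_cv_score metric name are illustrative; clf, trainX, and trainY are the objects from the cell earlier in the thread.)

from hyperdash import Experiment

exp = Experiment("RF GRIDSEARCH")  # experiment name is illustrative

clf.fit(trainX, trainY)

# Log the winning hyperparameters from the grid search as experiment params.
for name, value in clf.best_params_.items():
    exp.param(name, value)

# Log the best cross-validated score as a metric.
exp.metric("best_cv_score", clf.best_score_)

exp.end()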

@Casyfill (Author)

Thanks, but it is not immediately clear how to use Experiment, either with GridSearchCV or with a simple loop. Trying to override a parameter in an experiment raises an exception, so how should I use different sets of params within the same experiment?
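
(One possible pattern, sketched on the assumption that a param can only be set once per Experiment: open a fresh Experiment for each parameter configuration rather than reusing one. The sweep below over two of the grid's axes, and the experiment name, are illustrative.)

from itertools import product

from hyperdash import Experiment
from sklearn.ensemble import RandomForestClassifier

# One Experiment per configuration, since re-setting a param within a
# single Experiment raises an exception.
for n_estimators, criterion in product([20, 50, 100], ['gini', 'entropy']):
    exp = Experiment("RF sweep")  # name is illustrative
    exp.param("n_estimators", n_estimators)
    exp.param("criterion", criterion)

    model = RandomForestClassifier(n_estimators=n_estimators,
                                   criterion=criterion,
                                   random_state=2017)
    model.fit(trainX, trainY)
    exp.metric("test_score", model.score(testX, testY))

    exp.end()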
