Automatic benchmark model

A Benchmark object has a similar API to a scikit-learn estimator: you build an instance with the desired arguments and fit it to the data at a later moment. A Benchmark is a convenience wrapper that reads the training data, passes it through a simplified pipeline consisting of data imputation and a standard scaler, and then fits the benchmark estimator, calibrated with a grid search.
A gingado Benchmark object seeks to automatise a significant part of creating a benchmark model. Importantly, the Benchmark object also has a compare method that helps users evaluate whether candidate models perform better than the benchmark; if one of them does, it becomes the new benchmark. This compare method takes as argument another fitted estimator (which could be a standalone estimator or a whole pipeline) or a list of fitted estimators.

Benchmarks start with default values that should perform reasonably well in most settings, but the user is also free to choose any of the benchmark's components by passing as arguments the data split, the pipeline, and/or a dictionary of parameters for the hyperparameter tuning.
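For orientation, the typical workflow looks like the following minimal sketch (the import path gingado.benchmark is assumed here, and the candidate model is an arbitrary choice):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from gingado.benchmark import ClassificationBenchmark  # assumed import path

X, y = make_classification()

# fit a benchmark with default settings
bm = ClassificationBenchmark().fit(X, y)

# compare the benchmark with a fitted candidate model;
# if the candidate performs better, it becomes the new benchmark
candidate = LogisticRegression().fit(X, y)
bm.compare(X, y, candidates=candidate)
```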
Base class
gingado has a ggdBenchmark base class that contains the basic functionalities for Benchmark objects. It is not meant to be used by itself, but only as a parent class for Benchmark objects. gingado ships with two such objects that subclass ggdBenchmark: ClassificationBenchmark and RegressionBenchmark. They are both described below in their respective sections.

Users are encouraged to submit a PR with their own benchmark models subclassing ggdBenchmark.
ggdBenchmark
ggdBenchmark ()
The base class for gingado’s Benchmark objects.
compare
compare (X:numpy.ndarray, y:numpy.ndarray, candidates, ensemble_method='object_default', update_benchmark:bool=True)
Use a testing dataset to compare the performance of the fitted benchmark model with one or more candidate models using a grid search.
| | Type | Default | Details |
|---|---|---|---|
| X | np.ndarray | | Array-like data of shape (n_samples, n_features) |
| y | np.ndarray | | Array-like data of shape (n_samples,) or (n_samples, n_targets) |
| candidates | | | Candidate estimator or list of candidate estimator(s) |
| ensemble_method | str | object_default | |
| update_benchmark | bool | True | Whether to use the best performing candidate model as the new benchmark |
compare_fitted_candidates
compare_fitted_candidates (X, y, candidates, scoring_func)
document
document (documenter:Optional[gingado.model_documentation.ggdModelDocumentation]=None)
Document the benchmark model using the template in documenter
| | Type | Default | Details |
|---|---|---|---|
| documenter | ggdModelDocumentation \| None | None | A gingado Documenter, or the documenter set in auto_document if None |
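For example, with a fitted benchmark bm (as in the examples below), documentation can be triggered explicitly. A sketch: calling the method without an argument falls back to the documenter configured in auto_document.

```python
# document the fitted benchmark with the documenter set in `auto_document`
# (a ModelCard by default, per the class signatures below)
bm.document()
bm.model_documentation.show_json()
```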
predict
predict (X, **predict_params)
Note: only available if the benchmark implements this method.
fit_predict
fit_predict (X, y=None, **predict_params)
Note: only available if the benchmark implements this method.
predict_proba
predict_proba (X, **predict_proba_params)
Note: only available if the benchmark implements this method.
predict_log_proba
predict_log_proba (X, **predict_log_proba_params)
Note: only available if the benchmark implements this method.
decision_function
decision_function (X)
Note: only available if the benchmark implements this method.
score
score (X)
Note: only available if the benchmark implements this method.
score_samples
score_samples (X)
Note: only available if the benchmark implements this method.
Classification tasks
The default benchmark for classification tasks is a RandomForestClassifier object. Its parameters are fine-tuned in each case according to the user's data.
ClassificationBenchmark
ClassificationBenchmark (cv=None, default_cv=StratifiedShuffleSplit(n_splits=10, random_state=None, test_size=None, train_size=None), estimator=RandomForestClassifier(oob_score=True), param_grid={'n_estimators': [100, 250], 'max_features': ['sqrt', 'log2', None]}, param_search=<class 'sklearn.model_selection._search.GridSearchCV'>, scoring=None, auto_document=<class 'gingado.model_documentation.ModelCard'>, random_state=None, verbose_grid=False, ensemble_method=<class 'sklearn.ensemble._voting.VotingClassifier'>)
A gingado Benchmark object used for classification tasks
fit
fit (X:numpy.ndarray, y:Optional[numpy.ndarray]=None)
Fit the ClassificationBenchmark model
| | Type | Default | Details |
|---|---|---|---|
| X | np.ndarray | | Array-like data of shape (n_samples, n_features) |
| y | np.ndarray \| None | None | Array-like data of shape (n_samples,) or (n_samples, n_targets), or None |
```python
from sklearn.datasets import make_classification
from gingado.benchmark import ClassificationBenchmark

# some mock up data
X, y = make_classification()

# the gingado benchmark
bm = ClassificationBenchmark(verbose_grid=2).fit(X, y)

# note that now the `bm` object can be used as an estimator
assert bm.predict(X).shape == y.shape
```
```
Fitting 10 folds for each of 6 candidates, totalling 60 fits
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.4s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=100; total time= 0.1s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.4s
[CV] END ................max_features=None, n_estimators=250; total time= 0.4s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.4s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=250; total time= 0.3s
```
Importantly, gingado automatically provides some information to help the user document the benchmark model. More specifically, ggdBenchmark objects collect model information and pass it to a dictionary with key info in a field called model_details.
```python
bm.model_documentation.show_json()
```

```
{'model_details': {'developer': 'Person or organisation developing the model',
'datetime': '2023-06-22 11:07:52 ',
'version': 'Model version',
'type': 'Model type',
'info': {'_estimator_type': 'classifier',
'best_estimator_': RandomForestClassifier(n_estimators=250, oob_score=True),
'best_index_': 1,
'best_params_': {'max_features': 'sqrt', 'n_estimators': 250},
'best_score_': 0.8700000000000001,
'classes_': array([0, 1]),
'cv_results_': {'mean_fit_time': array([0.11845815, 0.29829991, 0.11383433, 0.27763405, 0.12743814,
0.32717748]),
'std_fit_time': array([0.00532633, 0.02523737, 0.00180408, 0.00226647, 0.00590799,
0.0308631 ]),
'mean_score_time': array([0.0078788 , 0.01841307, 0.00783651, 0.01708548, 0.00718062,
0.01765988]),
'std_score_time': array([0.00068533, 0.00185872, 0.00076878, 0.00036287, 0.00023495,
0.00156245]),
'param_max_features': masked_array(data=['sqrt', 'sqrt', 'log2', 'log2', None, None],
mask=[False, False, False, False, False, False],
fill_value='?',
dtype=object),
'param_n_estimators': masked_array(data=[100, 250, 100, 250, 100, 250],
mask=[False, False, False, False, False, False],
fill_value='?',
dtype=object),
'params': [{'max_features': 'sqrt', 'n_estimators': 100},
{'max_features': 'sqrt', 'n_estimators': 250},
{'max_features': 'log2', 'n_estimators': 100},
{'max_features': 'log2', 'n_estimators': 250},
{'max_features': None, 'n_estimators': 100},
{'max_features': None, 'n_estimators': 250}],
'split0_test_score': array([0.9, 0.9, 0.9, 0.9, 1. , 1. ]),
'split1_test_score': array([0.7, 0.8, 0.8, 0.8, 0.7, 0.7]),
'split2_test_score': array([0.8, 0.8, 0.8, 0.8, 0.8, 0.8]),
'split3_test_score': array([1., 1., 1., 1., 1., 1.]),
'split4_test_score': array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9]),
'split5_test_score': array([0.8, 0.8, 0.8, 0.8, 0.8, 0.8]),
'split6_test_score': array([0.8, 0.9, 0.9, 0.9, 0.9, 0.9]),
'split7_test_score': array([0.8, 0.8, 0.8, 0.8, 0.8, 0.8]),
'split8_test_score': array([1. , 1. , 1. , 1. , 0.9, 0.9]),
'split9_test_score': array([0.8, 0.8, 0.7, 0.7, 0.8, 0.8]),
'mean_test_score': array([0.85, 0.87, 0.86, 0.86, 0.86, 0.86]),
'std_test_score': array([0.09219544, 0.0781025 , 0.09165151, 0.09165151, 0.09165151,
0.09165151]),
'rank_test_score': array([6, 1, 4, 4, 2, 2], dtype=int32)},
'multimetric_': False,
'n_features_in_': 20,
'n_splits_': 10,
'refit_time_': 0.2795419692993164,
'scorer_': <function sklearn.metrics._scorer._passthrough_scorer(estimator, *args, **kwargs)>},
'paper': 'Paper or other resource for more information',
'citation': 'Citation details',
'license': 'License',
'contact': 'Where to send questions or comments about the model'},
'intended_use': {'primary_uses': 'Primary intended uses',
'primary_users': 'Primary intended users',
'out_of_scope': 'Out-of-scope use cases'},
'factors': {'relevant': 'Relevant factors',
'evaluation': 'Evaluation factors'},
'metrics': {'performance_measures': 'Model performance measures',
'thresholds': 'Decision thresholds',
'variation_approaches': 'Variation approaches'},
'evaluation_data': {'datasets': 'Datasets',
'motivation': 'Motivation',
'preprocessing': 'Preprocessing'},
'training_data': {'training_data': 'Information on training data'},
'quant_analyses': {'unitary': 'Unitary results',
'intersectional': 'Intersectional results'},
'ethical_considerations': {'sensitive_data': 'Does the model use any sensitive data (e.g., protected classes)?',
'human_life': 'Is the model intended to inform decisions about matters central to human life or flourishing - e.g., health or safety? Or could it be used in such a way?',
'mitigations': 'What risk mitigation strategies were used during model development?',
'risks_and_harms': 'What risks may be present in model usage? Try to identify the potential recipients,likelihood, and magnitude of harms. If these cannot be determined, note that they were considered but remain unknown',
'use_cases': 'Are there any known model use cases that are especially fraught?',
'additional_information': 'If possible, this section should also include any additional ethical considerations that went into model development, for example, review by an external board, or testing with a specific community.'},
'caveats_recommendations': {'caveats': 'For example, did the results suggest any further testing? Were there any relevant groups that were not represented in the evaluation dataset?',
 'recommendations': 'Are there additional recommendations for model use? What are the ideal characteristics of an evaluation dataset for this model?'}}
```
It is also simple to set a model that you have already fitted as the benchmark, and still benefit from the other functionalities provided by the Benchmark class. This can also be done if you are using a saved version of a fitted model (e.g., the model you are using in production) and want to have that as the benchmark.
```python
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier().fit(X, y)
bm.set_benchmark(estimator=forest)

assert forest == bm.benchmark
assert hasattr(bm.benchmark, "predict")
assert bm.predict(X).shape == y.shape
```
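The same pattern applies to a model loaded from disk. A sketch, assuming the production model was persisted with joblib (the file name is hypothetical):

```python
import joblib

# load an already-fitted model saved earlier (path is illustrative)
production_model = joblib.load("production_model.joblib")

# set it as the benchmark; compare, document and predict now work as above
bm.set_benchmark(estimator=production_model)
```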
Regression tasks
The default benchmark for regression tasks is a RandomForestRegressor object. Its parameters are fine-tuned in each case according to the user's data.
RegressionBenchmark
RegressionBenchmark (cv=None, default_cv=ShuffleSplit(n_splits=10, random_state=None, test_size=None, train_size=None), estimator=RandomForestRegressor(oob_score=True), param_grid={'n_estimators': [100, 250], 'max_features': ['sqrt', 'log2', None]}, param_search=<class 'sklearn.model_selection._search.GridSearchCV'>, scoring=None, auto_document=<class 'gingado.model_documentation.ModelCard'>, random_state=None, verbose_grid=False, ensemble_method=<class 'sklearn.ensemble._voting.VotingRegressor'>)
A gingado Benchmark object used for regression tasks
fit
fit (X:numpy.ndarray, y:Optional[numpy.ndarray]=None)
Fit the RegressionBenchmark model
| | Type | Default | Details |
|---|---|---|---|
| X | np.ndarray | | Array-like data of shape (n_samples, n_features) |
| y | np.ndarray \| None | None | Array-like data of shape (n_samples,) or (n_samples, n_targets), or None |
```python
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from gingado.benchmark import RegressionBenchmark

# some mock up data
X, y = make_regression()

# the gingado benchmark
bm = RegressionBenchmark().fit(X, y)

# note that now the `bm` object can be used as an estimator
assert bm.predict(X).shape == y.shape

# the user might also like to set another model as the benchmark
adaboost = AdaBoostRegressor().fit(X, y)
bm.set_benchmark(estimator=adaboost)

assert adaboost == bm.benchmark
assert hasattr(bm.benchmark, "predict")
assert bm.predict(X).shape == y.shape
```
```
Fitting 5 folds for each of 6 candidates, totalling 30 fits
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=100; total time= 0.1s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=sqrt, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=100; total time= 0.1s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=log2, n_estimators=250; total time= 0.3s
[CV] END ................max_features=None, n_estimators=100; total time= 0.2s
[CV] END ................max_features=None, n_estimators=100; total time= 0.2s
[CV] END ................max_features=None, n_estimators=100; total time= 0.3s
[CV] END ................max_features=None, n_estimators=100; total time= 0.2s
[CV] END ................max_features=None, n_estimators=100; total time= 0.2s
[CV] END ................max_features=None, n_estimators=250; total time= 0.6s
[CV] END ................max_features=None, n_estimators=250; total time= 0.6s
[CV] END ................max_features=None, n_estimators=250; total time= 0.5s
[CV] END ................max_features=None, n_estimators=250; total time= 0.5s
[CV] END ................max_features=None, n_estimators=250; total time= 0.5s
```
Below we compare the benchmark (set above manually to be the AdaBoost model) with two other candidate models: a Gaussian process and a linear Support Vector Machine (SVM).
```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import LinearSVR

gauss_reg = GaussianProcessRegressor().fit(X, y)
svm_reg = LinearSVR().fit(X, y)

bm.compare(X, y, candidates=[gauss_reg, svm_reg])
```
```
Benchmark updated!
New benchmark:
Pipeline(steps=[('candidate_estimator', LinearSVR())])
```
Note that when the Benchmark object finds a model that performs better than it does, the user is informed that the benchmark has been updated, and the new benchmark model is shown. This only happens when the argument update_benchmark is set to True (the default).
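If the user prefers to evaluate candidates without ever replacing the current benchmark, the flag can simply be switched off, per the compare signature above:

```python
# compare candidates but keep the current benchmark regardless of the result
bm.compare(X, y, candidates=[gauss_reg, svm_reg], update_benchmark=False)
```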
Below we can see by how much it outperformed the other candidates, including the previous benchmark model and an ensemble of the previous benchmark and all the candidates. It is also a good opportunity to see how stable the performance of each model was, as judged by the standard deviation of the scores across the validation folds.
```python
import pandas as pd

pd.DataFrame(bm.benchmark.cv_results_)[['params', 'mean_test_score', 'std_test_score', 'rank_test_score']]
```
| | params | mean_test_score | std_test_score | rank_test_score |
|---|---|---|---|---|
| 0 | {'candidate_estimator': (DecisionTreeRegressor... | 0.364793 | 0.190020 | 2 |
| 1 | {'candidate_estimator': GaussianProcessRegress... | -0.199963 | 0.256781 | 4 |
| 2 | {'candidate_estimator': LinearSVR(), 'candidat... | 0.544040 | 0.094230 | 1 |
| 3 | {'candidate_estimator': VotingRegressor(estima... | 0.344233 | 0.117852 | 3 |
General comments on benchmarks

Scoring

ClassificationBenchmark and RegressionBenchmark use the default scoring method for comparing model alternatives, both during estimation of the benchmark model and when comparing this benchmark with candidate models. Users are encouraged to consider whether another scoring method is more suitable for their use case. More information on the scoring methods that are compatible with gingado Benchmark objects can be found in the scikit-learn documentation.
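For instance, a different scoring method can be passed through the scoring argument shown in the class signatures above. A sketch, where 'balanced_accuracy' is one of scikit-learn's built-in scorer names and the import path is assumed:

```python
from sklearn.datasets import make_classification
from gingado.benchmark import ClassificationBenchmark  # assumed import path

X, y = make_classification()

# tune and compare models by balanced accuracy instead of the default score
bm = ClassificationBenchmark(scoring="balanced_accuracy").fit(X, y)
```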
Data split

gingado benchmarks rely on hyperparameter tuning to discover the benchmark specification that is most likely to perform well on the user's data. This tuning in turn depends on a data splitting strategy for the cross-validation. By default, gingado uses StratifiedShuffleSplit (in classification problems) or ShuffleSplit (in regression problems) if the data is not a time series, and TimeSeriesSplit otherwise.

The user may override these defaults by setting either the parameter cv or default_cv when instantiating the gingado benchmark class. The difference is that default_cv is only used after gingado checks that the data is not a time series (if a time dimension exists, then TimeSeriesSplit is used).
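As an illustration of that default behaviour, the sketch below assumes that gingado detects a time dimension through a pandas DatetimeIndex; the detection mechanism is an assumption here, only the TimeSeriesSplit fallback is stated above.

```python
import pandas as pd
from sklearn.datasets import make_regression
from gingado.benchmark import RegressionBenchmark  # assumed import path

X, y = make_regression()

# give the data a time dimension; if gingado detects it,
# TimeSeriesSplit is used instead of default_cv
idx = pd.date_range("2020-01-01", periods=len(y), freq="D")
X_ts = pd.DataFrame(X, index=idx)
y_ts = pd.Series(y, index=idx)

bm_ts = RegressionBenchmark().fit(X_ts, y_ts)
```

Overriding the defaults explicitly looks as follows: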
```python
from sklearn.model_selection import ShuffleSplit, TimeSeriesSplit

X, y = make_classification()
bm_cls = ClassificationBenchmark(cv=TimeSeriesSplit(n_splits=3)).fit(X, y)
assert bm_cls.benchmark.n_splits_ == 3

X, y = make_regression()
bm_reg = RegressionBenchmark(default_cv=ShuffleSplit(n_splits=7)).fit(X, y)
assert bm_reg.benchmark.n_splits_ == 7
```
Please refer to this page for more information on the different Splitter classes available in scikit-learn, and this page for practical advice on how to choose a splitter for data that are not time series. Any one of these objects (or a custom splitter that is compatible with them) can be passed to a Benchmark object.

Users that wish to use specific parameters should pass the actual Splitter object as the parameter, as done with the n_splits parameter in the chunk above.
Custom benchmarks

gingado provides users with two Benchmark objects out of the box: ClassificationBenchmark and RegressionBenchmark, to be used depending on the task at hand. Both classes derive from a base class ggdBenchmark, which implements methods that facilitate model comparison. Users that want to create a customised benchmark model for themselves have two options:

- The simpler possibility is to train the estimator as usual, and then assign the fitted estimator to a Benchmark object.
- If the user wants more control over the fitting process of the benchmark, they can create a class that subclasses ggdBenchmark and either implements custom fit, predict and score methods, or also subclasses scikit-learn's BaseEstimator (a minimal sketch follows this list).
  - In any case, if the user wants the benchmark to automatically detect whether the data is a time series and also to document the model right after fitting, the fit method should call self._fit on the data. Otherwise, the user can simply implement any consistent logic in fit as the user sees fit (pun intended).
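As a minimal sketch of the second option (the import path, the estimator choice, and the attribute names are assumptions; the call to self._fit follows the description above):

```python
from sklearn.base import BaseEstimator
from sklearn.linear_model import ElasticNet

from gingado.benchmark import ggdBenchmark  # assumed import path

class ElasticNetBenchmark(ggdBenchmark, BaseEstimator):
    """Illustrative custom benchmark wrapping an elastic net."""

    def __init__(self, estimator=ElasticNet()):
        self.estimator = estimator

    def fit(self, X, y=None):
        # calling self._fit keeps gingado's time series detection and
        # automatic documentation, as described above
        self._fit(X, y)
        return self
```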