I have a situation where I dynamically pass scorers to a grid search object, either as strings (e.g. 'accuracy') or as custom scorers created via make_scorer, where the parameter greater_is_better may be True or False. In other words, my scorers are not always best at their greatest value.
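For instance, wrapping a loss with make_scorer(..., greater_is_better=False) produces a scorer that returns the negated value, so that internally "greater is better" still holds. A quick illustration of what I mean (standard scikit-learn behavior, with a dummy estimator just to make it runnable):

    import numpy as np
    from sklearn.dummy import DummyRegressor
    from sklearn.metrics import make_scorer, mean_squared_error

    X = np.array([[0.0], [1.0], [2.0]])
    y = np.array([0.0, 1.0, 2.0])
    est = DummyRegressor(strategy='mean').fit(X, y)  # always predicts mean(y) = 1.0

    mse_scorer = make_scorer(mean_squared_error, greater_is_better=False)
    # The raw MSE here is 2/3; the scorer returns it negated:
    print(mse_scorer(est, X, y))  # approx. -0.667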
What I want to do is dynamically extract the "sign" of the scorer from the grid search object, so that I know how to rank the scores (I know this is trivial if you do it "statically"), whether the scoring is a pre-configured metric ("accuracy") or a custom one.
I tried to look inside the "core" code of the grid search object, but to no avail. Is there some attribute or property I can leverage?
Example:
    from sklearn.metrics import make_scorer
    from sklearn.model_selection import GridSearchCV

    # estimator, param_grid, X, y and func are assumed to be defined elsewhere.

    def build_grid_1():
        cls_1 = GridSearchCV(estimator, param_grid, scoring='accuracy')
        cls_1.fit(X, y)
        return cls_1

    def build_grid_2():
        cls_1 = GridSearchCV(estimator, param_grid,
                             scoring=make_scorer(func, greater_is_better=False))
        cls_1.fit(X, y)
        return cls_1

    def build_grid_3():
        cls_1 = GridSearchCV(estimator, param_grid,
                             scoring=make_scorer(func, greater_is_better=True))
        cls_1.fit(X, y)
        return cls_1

    def gauge_scoring_sign(grid_search_class):
        return scorer_sign  # <-- how do I get this from the fitted object?

    gauge_scoring_sign(build_grid_1())  # 1
    gauge_scoring_sign(build_grid_2())  # -1
    gauge_scoring_sign(build_grid_3())  # 1
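To make the goal concrete, here is a sketch of roughly what I imagine the answer looks like. The scorer_ attribute is documented on a fitted grid search; the _sign attribute, however, is just my guess at make_scorer's internals, not confirmed public API:

    def gauge_scoring_sign(grid_search_class):
        # `scorer_` holds the scorer callable the fitted search actually used;
        # string metrics like 'accuracy' get resolved to the same kind of object.
        scorer = grid_search_class.scorer_
        # Guess: scorers built by make_scorer store their direction in a private
        # `_sign` attribute (1 when greater_is_better=True, -1 when False).
        return getattr(scorer, '_sign', 1)

If there is a public, stable way to get at the same information, that is really what I'm after.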