Is there an easy way to penalize ML model complexity with hyperopt?

I want to optimize the max_depth parameter of an xgboost model with hyperopt to decrease the loss.

Hyperopt quickly finds a good loss of around 0.3 after 10 iterations and then pushes max_depth to my upper bound for only a marginal improvement. After 1000 iterations it ends up at a max_depth of 18 with a loss of 0.29, although it also tested models with a max_depth of 3 and a loss of 0.30.
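
For context, a simplified sketch of the kind of search I am running (the dataset, metric, fixed parameters and exact bounds below are placeholders, not my real setup):

```python
# Tune max_depth with hyperopt, minimizing cross-validated log loss.
import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for my real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def objective(params):
    model = xgb.XGBClassifier(
        max_depth=int(params["max_depth"]),
        n_estimators=200,
        eval_metric="logloss",
    )
    # Mean cross-validated log loss (sklearn returns the negated score).
    return -cross_val_score(model, X, y, scoring="neg_log_loss", cv=3).mean()

space = {"max_depth": hp.quniform("max_depth", 3, 18, 1)}

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=1000, trials=Trials())
print(best)
```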

Is there a way to find less complex parameter settings with nearly the same loss? Specifically: is it possible to penalize the objective function so that max_depth only increases if it has a significant impact on the loss?
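
What I have in mind is something like the sketch below, which builds on the snippet above (same data and search space): hand hyperopt a penalized loss so that each extra level of max_depth has to buy a minimum improvement. The penalty weight is made up and would itself need tuning.

```python
# Return a penalized loss to hyperopt so extra depth has to "pay for itself".
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
import xgboost as xgb
from sklearn.model_selection import cross_val_score

PENALTY_PER_DEPTH = 0.002  # assumed cost per extra depth level

def penalized_objective(params):
    depth = int(params["max_depth"])
    model = xgb.XGBClassifier(max_depth=depth, n_estimators=200, eval_metric="logloss")
    raw_loss = -cross_val_score(model, X, y, scoring="neg_log_loss", cv=3).mean()
    return {
        "loss": raw_loss + PENALTY_PER_DEPTH * depth,  # what hyperopt minimizes
        "true_loss": raw_loss,                         # raw CV loss, kept for inspection
        "status": STATUS_OK,
    }

space = {"max_depth": hp.quniform("max_depth", 3, 18, 1)}
best = fmin(fn=penalized_objective, space=space, algo=tpe.suggest,
            max_evals=1000, trials=Trials())
```

With a penalty of 0.002 per level, a jump from max_depth 3 to 18 would only pay off if it reduced the raw loss by more than 15 * 0.002 = 0.03, which is larger than the 0.01 improvement I am actually seeing.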
