This is a minimal example to illustrate my setting: tuning something similar to an autoencoder, including both topology and learning-algorithm hyperparameters. It is necessarily contrived for simplicity.
I start by providing the autoencoder with epochs=20 and running hyperopt once, set to T trials. epochs will then be manually increased according to the time available in my experimental schedule. Each new value requires a follow-up run of hyperopt's fmin, which extends the trials object by another T trials. This is intended to let the TPE optimizer perform the initial trials quickly, at low computational cost, as a warm-up for the increasingly heavier trials.
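Roughly, the schedule looks like the sketch below. T, the epoch values and make_objective are placeholders of mine (make_objective being a hypothetical factory that binds the current epochs value); the point is that the same Trials object is reused, so each fmin call appends another T trials to it:

from hyperopt import fmin, tpe, Trials
from numpy.random import default_rng

T = 50                      # trials per fmin call (placeholder)
trials = Trials()           # persistent across runs
for run, epochs in enumerate((20, 40, 80), start=1):   # manually increased over time
    fmin(fn=make_objective(epochs),   # hypothetical factory binding the current epochs value
         space=myspace,               # the search space defined further below
         algo=tpe.suggest,
         max_evals=run * T,           # cumulative, so the existing trials are extended by T
         trials=trials,
         rstate=default_rng(42))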
I also expose the epochs value to the optimizer itself, as a pseudo-interval so narrow that it is frozen in practice. It is a "hyperparameter" that will only ever be set manually, but the TPE optimizer should know about it, since my changes to it affect the loss value. If there is a better way to change a manual parameter and inform TPE about it, please let me know.
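Concretely, by a pseudo-interval I mean a range (or choice) so narrow that TPE effectively always proposes the same value; a uniform variant of the hp.choice used below would be, for instance (my assumption being that this is enough for TPE to associate the parameter with the observed losses):

from hyperopt import hp

# A width of 1e-6 means every sample is, in practice, 80,
# yet "epochs" still appears in the space that TPE models.
frozen_epochs = hp.uniform("epochs", 80, 80 + 1e-6)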
The main issue starts here. The higher the epochs value, the better the model tends to be (overfitting is not an issue in my setting). However, the last epoch often does not yield the model with the lowest loss value. The "autoencoder" keeps track of its best model and can return it along with the epoch at which it was reached.
How can I tell hyperopt, from within the objective function, that the value actually needed was, e.g., 43 instead of the manually set 80 illustrated below?
from hyperopt import fmin, hp, tpe, STATUS_OK, Trials
from numpy.random import default_rng

myspace = {"epochs": hp.choice("epochs", (80, 80.000001)),  # frozen in practice
           "alpha": hp.quniform("alpha", 0.1, 0.9, 0.1),
           "beta": hp.quniform("beta", 0.1, 0.9, 0.1),
           "flag": hp.choice("flag", ("yes", "no"))}

def objective(space):
    # space["epochs"] is ignored, as epochs is provided directly to the learning algorithm.
    quality, true_epochs = algorithm(space["alpha"], space["beta"], space["flag"], epochs=80)
    # Is there such a thing as a feedback entry to inform the optimizer of the true value of epochs?
    feedback = {"epochs": true_epochs}
    return {"loss": -quality, "status": STATUS_OK, "X_": X_, "FEEDBACK": feedback}

trials = Trials()
rnd = default_rng(42)
fmin(fn=objective, space=myspace, algo=tpe.suggest, max_evals=T, trials=trials, rstate=rnd)
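For what it is worth, the extra entries do come back after the run when I inspect the trials object (each finished trial stores the full dict returned by the objective under its "result" key), but as far as I can tell this only records the value; it does not influence what TPE will sample for epochs:

# Read the recorded feedback back from the Trials object used above.
for t in trials.trials:
    print(t["result"]["loss"], t["result"]["FEEDBACK"]["epochs"])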
For simplicity, I omitted that epochs is actually set by a script.