MLflow: hyperparameter values in the last experiment run exceed the tuning ranges

I'm currently working on a project using MLflow Recipes. So far everything has gone rather smoothly; however, I've encountered some strange behavior in MLflow. In my experiments I use a LightGBM classifier together with hyperparameter tuning (hyperopt) and early stopping, so I defined ranges of acceptable values for some parameters. Here's what the training section of my recipe.yaml looks like:

train:
  predict_scores_for_all_classes: True
  predict_prefix: "predicted_"
  using: "custom"
  estimator_method: estimator_fn
  estimator_params:
    random_state: 42
  tuning:
    enabled: True
    algorithm: "hyperopt.rand.suggest"
    max_trials: 100
    parallelism: 1
    early_stop_fn: early_stopping
    parameters:
      alpha:
        distribution: "uniform"
        low: 0.0001
        high: 0.1
      learning_rate:
        distribution: "uniform"
        low: 0.0001
        high: 0.1
      max_depth:
        distribution: "uniformint"
        low: 1
        high: 3
      n_estimators:
        distribution: "uniformint"
        low: 1000
        high: 10000
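
For context, the estimator_fn and early_stopping callables referenced in the recipe look roughly like this; a simplified sketch, not my exact code, assuming the standard MLflow Recipes estimator_fn signature and hyperopt's early_stop_fn contract:

from typing import Any, Dict

from lightgbm import LGBMClassifier


def estimator_fn(estimator_params: Dict[str, Any] = None):
    # MLflow Recipes calls this with the sampled hyperparameters
    # and expects an unfitted estimator back.
    return LGBMClassifier(**(estimator_params or {}))


def early_stopping(trials, best_loss=None, no_improvement=0):
    # hyperopt early_stop_fn contract: called after each trial with the
    # Trials object plus the state returned last time; returns (stop, state).
    # This stops tuning once the loss hasn't improved for ten trials.
    new_loss = trials.trials[-1]["result"]["loss"]
    if best_loss is None or new_loss < best_loss:
        return False, [new_loss, 0]
    return no_improvement + 1 >= 10, [best_loss, no_improvement + 1]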

Training goes well so far, but looking at the experiment runs I saw that max_depth is always logged as -1 in the last run before training ends, even though the range for max_depth is [1, 3] (notably, -1 is LightGBM's default for max_depth, meaning no depth limit). Is this some sort of standard behavior, a bug, or am I doing something wrong?
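
For reference, this is roughly how I inspected the logged parameters; the experiment name below is a placeholder:

import mlflow

# List the tuning runs and the max_depth each one logged.
runs = mlflow.search_runs(experiment_names=["my_recipe_experiment"])
print(runs[["run_id", "params.max_depth"]].tail())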

I also noticed that when the last run turns out to beat the previously best model, it gets registered without going through another early-stopping cycle (ten iterations). Maybe that has something to do with it, or maybe it's a separate problem?
