Scala: how to modify the default metric for cross validation

I found the code below on this site: https://spark.apache.org/docs/2.3.1/ml-tuning.html

// Note that the evaluator here is a BinaryClassificationEvaluator and its default metric
// is areaUnderROC.
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new BinaryClassificationEvaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(2)  // Use 3+ in practice
  .setParallelism(2)  // Evaluate up to 2 parameter settings in parallel

As stated there, the default metric for BinaryClassificationEvaluator is areaUnderROC (AUC). How can I change this default metric to the F1-score?

I tried:

// Note that the evaluator here is a BinaryClassificationEvaluator and its default metric
// is areaUnderROC.
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new BinaryClassificationEvaluator.setMetricName("f1"))
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(2)  // Use 3+ in practice
  .setParallelism(2)  // Evaluate up to 2 parameter settings in parallel

But I got some errors. I searched many sites but did not find a solution.

There are 2 answers

Answer by gmds

setMetricName only accepts "areaUnderPR" or "areaUnderROC", so "f1" is rejected. (Note also that new BinaryClassificationEvaluator.setMetricName(...) does not parse as intended in Scala; it would have to be new BinaryClassificationEvaluator().setMetricName(...).) You will need to write your own Evaluator; something like this:

import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.param.shared.{HasLabelCol, HasPredictionCol}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.types.IntegerType
import org.apache.spark.sql.{Dataset, functions => F}

// HasPredictionCol / HasLabelCol provide the prediction and label params
// (with default column names "prediction" and "label")
class FScoreEvaluator(override val uid: String) extends Evaluator with HasPredictionCol with HasLabelCol {

  def this() = this(Identifiable.randomUID("FScoreEvaluator"))

  def evaluate(dataset: Dataset[_]): Double = {
    // Count true positives, predicted positives and actual positives in one aggregation pass
    val truePositive = F.sum(((F.col(getLabelCol) === 1) && (F.col(getPredictionCol) === 1)).cast(IntegerType))
    val predictedPositive = F.sum((F.col(getPredictionCol) === 1).cast(IntegerType))
    val actualPositive = F.sum((F.col(getLabelCol) === 1).cast(IntegerType))

    val precision = truePositive / predictedPositive
    val recall = truePositive / actualPositive
    // F1 = 2 * precision * recall / (precision + recall)
    val fScore = F.lit(2) * (precision * recall) / (precision + recall)

    // The aggregation yields a single row with a single Double value
    dataset.select(fScore).collect()(0)(0).asInstanceOf[Double]
  }

  override def copy(extra: ParamMap): Evaluator = defaultCopy(extra)
}
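
You can then plug this into the cross-validation setup from the question. A minimal sketch, assuming pipeline and paramGrid are defined as there and that the pipeline outputs the default label and prediction columns:

import org.apache.spark.ml.tuning.CrossValidator

val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new FScoreEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)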

Answer by panc

This is based on @gmds's answer; make sure your Spark version is >= 2.3.

You can also follow the implementation of RegressionEvaluator in Spark to implement other custom evaluators.

I also overrode isLargerBetter so that the instantiated evaluator can be used in model selection (e.g. CV).

import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.param.shared.{HasLabelCol, HasPredictionCol, HasWeightCol}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.types.IntegerType
import org.apache.spark.sql.{Dataset, functions => F}

class WRmseEvaluator(override val uid: String) extends Evaluator with HasPredictionCol with HasLabelCol with HasWeightCol {

    def this() = this(Identifiable.randomUID("wrmseEval"))

    // The shared param traits do not provide setters, so define them here
    def setPredictionCol(value: String): this.type = set(predictionCol, value)
    
    def setLabelCol(value: String): this.type = set(labelCol, value)
    
    def setWeightCol(value: String): this.type = set(weightCol, value)
    
    def evaluate(dataset: Dataset[_]): Double = {
        // Weighted RMSE: sqrt( sum(w * residual^2) / sum(w) )
        dataset
            .withColumn("residual", F.col(getLabelCol) - F.col(getPredictionCol))
            .select(
                F.sqrt(F.sum(F.col(getWeightCol) * F.pow(F.col("residual"), 2)) / F.sum(getWeightCol))
            )
            .collect()(0)(0).asInstanceOf[Double]
    }

    override def copy(extra: ParamMap): Evaluator = defaultCopy(extra)

    // Lower weighted RMSE is better, so model selection should minimize it
    override def isLargerBetter: Boolean = false
}

Here is how to use it:

val wrmseEvaluator = new WRmseEvaluator()
    .setLabelCol(labelColName)
    .setPredictionCol(predColName)
    .setWeightCol(weightColName)
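
Because isLargerBetter is false, a CrossValidator given this evaluator will pick the parameter set that minimizes the weighted RMSE. You can also call it directly on a scored DataFrame; for illustration, predictions below is a hypothetical DataFrame that contains the three configured columns:

// 'predictions' is a hypothetical scored DataFrame with the label,
// prediction and weight columns configured above
val wrmse = wrmseEvaluator.evaluate(predictions)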