Per the Spark documentation, only RDD actions can trigger a Spark job; transformations are evaluated lazily, when an action is eventually called. Yet I see that the `sortBy` transformation runs immediately and shows up as a job in the Spark UI. Why?
`sortBy` is implemented using `sortByKey`, which depends on a `RangePartitioner` (JVM) or a partitioning function (Python). When you call `sortBy`/`sortByKey`, the partitioner (partitioning function) is initialized eagerly and samples the input RDD to compute partition boundaries. The job you see in the UI corresponds to this sampling pass.

The actual sorting is performed only when you execute an action on the newly created `RDD` or its descendants.
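To make the mechanism concrete, here is a minimal sketch of the idea in plain Python. It is not Spark's actual `RangePartitioner` code; the function name, sample size, and split-point selection are illustrative assumptions. The point is that picking range boundaries requires reading (a sample of) the data up front, which is why an eager pass happens before any action:

```python
import random

def range_boundaries(data, num_partitions, sample_size=20, seed=0):
    """Toy model of range partitioning: eagerly sample the input to
    pick split points. In Spark, this sampling pass is a real job,
    which is why sortBy shows up in the Spark UI before any action.
    (Illustrative sketch, not Spark's implementation.)"""
    rng = random.Random(seed)
    sample = sorted(rng.sample(data, min(sample_size, len(data))))
    step = len(sample) / num_partitions
    # num_partitions - 1 boundaries split the key space into ranges.
    return [sample[int(i * step)] for i in range(1, num_partitions)]

# The sampling (and hence the "job") happens here, even though no
# element has been "sorted" into its final partition yet.
boundaries = range_boundaries(list(range(100)), num_partitions=4)
```

Only the boundaries are computed eagerly; shuffling each element into its range and sorting within partitions is deferred, just as the actual sort in Spark waits for an action.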