According to the Spark documentation, only RDD actions can trigger a Spark job; transformations are lazily evaluated and only computed when an action is called on them.
Yet I see that the `sortBy` transformation is applied immediately and shows up as a job in the Spark UI. Why?
`sortBy` is implemented using `sortByKey`, which depends on a `RangePartitioner` (JVM) or a partitioning function (Python). When you call `sortBy` / `sortByKey`, the partitioner (partitioning function) is initialized eagerly and samples the input RDD to compute partition boundaries. The job you see corresponds to this process.

The actual sorting is performed only when you execute an action on the newly created RDD or its descendants.
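
For example, here is a minimal sketch (assuming a local SparkSession; the dataset and names are just illustrative) that makes the sampling job visible in the Spark UI before any action is called:

```scala
import org.apache.spark.sql.SparkSession

object SortByDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sortBy-demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val rdd = sc.parallelize(1 to 1000000)

    // No action has been called yet, but this line already appears as a job
    // in the Spark UI: the RangePartitioner samples `rdd` to compute
    // partition boundaries.
    val sorted = rdd.sortBy(identity)

    // The actual sort is only executed here, when an action runs on the
    // sorted RDD.
    sorted.take(5).foreach(println)

    spark.stop()
  }
}
```

If you watch the Spark UI while this runs, you should see one job for the boundary-sampling step at the `sortBy` call and a separate job for the `take` action that performs the sort itself.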