To my knowledge, in sklearn sample weights are incorporated into the impurity formula. Take binary classification with Gini impurity as an example:

Gini = 1 - p_0^2 - p_1^2

With sample weights, p_0 is calculated as the weighted class proportion:

p_0 = (Σ_{i : y_i = 0} w_i) / (Σ_i w_i)
However, looking into the source code of Spark ML, I found that sample weights do not seem to be used when calculating the per-node class probabilities. They are only used after the split, to reweight the impurities of the left and right nodes when computing the total impurity. As a result, a highly weighted positive example will not increase the positive probability of its node; it only adds to the node's total weight. I'm not sure whether my observation is right or wrong, so I'm hoping an expert here can clarify this.
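To make the difference concrete, here is a small sketch of the two behaviors as I understand them. The function names are my own, and the "unweighted" variant is only my reading of what Spark's per-node probabilities amount to if weights merely rescale node totals, not actual library code:

```python
import numpy as np

def weighted_gini(y, w):
    """Gini impurity using weighted class proportions (sklearn-style)."""
    y = np.asarray(y)
    w = np.asarray(w, dtype=float)
    p0 = w[y == 0].sum() / w.sum()  # weighted fraction of class 0
    p1 = 1.0 - p0
    return 1.0 - p0**2 - p1**2

def unweighted_gini(y):
    """Gini impurity from plain counts: my reading of Spark's per-node
    class probabilities, where weights don't enter the proportions."""
    y = np.asarray(y)
    p0 = (y == 0).mean()  # unweighted fraction of class 0
    p1 = 1.0 - p0
    return 1.0 - p0**2 - p1**2

y = [0, 0, 0, 1]          # three negatives, one positive
w = [1.0, 1.0, 1.0, 5.0]  # the positive example is weighted heavily

print(weighted_gini(y, w))   # weighted p_1 = 5/8, so the weight shifts the impurity
print(unweighted_gini(y))    # p_1 = 1/4 regardless of the weight
```

With the weighted version, the heavy positive example pushes p_1 from 1/4 up to 5/8; with the unweighted version, the class proportions never see the weight at all, which is exactly the discrepancy I'm asking about.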

