In R, I can write the following:
## Explicit
Reduce(function(x, y) x * y, c(1, 2, 3))
# returns 6
However, I can also do this less explicitly with the following:
## Less explicit
Reduce(`*`, c(1, 2, 3))
# also returns 6
In pyspark, I could do the following:
rdd = sc.parallelize([1, 2, 3])
rdd.reduce(lambda a, b: a * b)
Question: Can you mimic the "shorthand" (less explicit) syntax of R's Reduce(`*`, ...) in pyspark, perhaps with some sort of anonymous function?
In R, you're supplying a binary function. The multiplication operator (as with all R operators) is actually a binary function. Type `*` (with the backticks) at the R console to see what I mean.
In Python, the equivalent for multiplication is operator.mul. So:
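Since there's no live SparkContext here, a minimal sketch using Python's built-in functools.reduce to show the same shorthand; the pyspark call is shown in comments, assuming a SparkContext named sc as in the question:

```python
import operator
from functools import reduce

# operator.mul is the function form of `*`, analogous to R's `*` backtick form
print(reduce(operator.mul, [1, 2, 3]))  # 6

# With a SparkContext `sc` available, the same idea applies directly:
#   rdd = sc.parallelize([1, 2, 3])
#   rdd.reduce(operator.mul)  # 6
```

operator.mul plays the role of the anonymous lambda a, b: a * b, just as R's `*` replaces function(x, y) x * y.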