Setting the right number of partitions on an RDD


I read some comments saying that a good number of partitions for an RDD is 2-3 times the number of cores. I have 8 nodes, each with two 12-core processors, so I have 192 cores. I set the partition count between 384 and 576, but it doesn't seem to work efficiently; I tried 8 partitions, with the same result. Maybe I have to set other parameters so that my job works better on the cluster than on my machine. I should add that the file I analyse has 150k lines.

    val data = sc.textFile("/img.csv", 384)

1 Answer

Aakash Aggarwal

The primary performance effect comes from specifying either too few partitions or far too many partitions.

  1. Too few partitions: you will not utilize all of the cores available in the cluster.
  2. Too many partitions: there will be excessive overhead in managing many small tasks.

Between the two, the first is far more impactful on performance. Scheduling too many small tasks has a relatively small impact for partition counts below 1000; if you have on the order of tens of thousands of partitions, Spark gets very slow.
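To see where you currently stand, you can inspect an RDD's partition count at runtime and adjust it in either direction. A minimal sketch for the spark-shell (the path comes from the question; the target counts are placeholders):

    // Check how many partitions the RDD actually has.
    val data = sc.textFile("/img.csv")
    println(s"Current partitions: ${data.getNumPartitions}")

    // Too few partitions: increase the count (repartition triggers a full shuffle).
    val widened = data.repartition(192)

    // Too many partitions: reduce the count (coalesce avoids a full shuffle).
    val narrowed = widened.coalesce(24)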

Now, considering your case: you are getting the same results with 8 and with 384-576 partitions. The general rule of thumb is NoOfPartitions = (NumberOfWorkerNodes * NoOfCoresPerWorkerNode) - 1. Since each task is processed by a CPU core, you should set the number of partitions to the total number of cores in the cluster minus 1 (reserved for the driver's Application Master), so that each core processes one partition at a time. In your case that means 191 partitions may improve performance. Otherwise, the impact of setting too few or too many partitions is as explained at the beginning.
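Plugging in the numbers from the question (8 nodes, two 12-core processors each), a minimal sketch of that rule of thumb, assuming `sc` is the SparkContext in the spark-shell:

    // Rule of thumb: one partition per core, minus 1 for the Application Master.
    val numWorkerNodes = 8        // from the question
    val coresPerWorkerNode = 24   // two 12-core processors per node

    val noOfPartitions = numWorkerNodes * coresPerWorkerNode - 1  // 191

    val data = sc.textFile("/img.csv", noOfPartitions)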

Hope this helps!