When using the FIFO scheduler with YARN (FIFO is the default, right?), I found that YARN reserves some memory/CPU while running an application. Our application doesn't need any of this reserved capacity, since we want a fixed number of cores for the tasks, depending on the user's account. The reserved memory makes our calculations inaccurate, so I am wondering whether there is any way to remove it (the snippet after the questions below shows how we observe it). If removing it is not possible, we would like to scale the cluster instead (we are using Dataproc on GCP), but without graceful decommissioning, scaling down the cluster kills the running job.
1. Is there any way to get rid of the reserved memory?
2. If not, is there any way to implement graceful decommissioning on YARN 2.8.1? I found examples for 3.0.0 alpha (GCP only has a beta version), but couldn't find any working instructions for 2.8.1.
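For context on question 1, this is roughly how we see the reservations: a minimal sketch, assuming the ResourceManager web UI runs on the default port 8088 on the Dataproc master node ("cluster-m" is a placeholder for your master hostname):

```
# Ask the ResourceManager for cluster-wide metrics; the JSON response
# includes reservedMB and reservedVirtualCores, i.e. the resources YARN
# is currently holding back for pending containers.
curl -s http://cluster-m:8088/ws/v1/cluster/metrics
```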
Thanks in advance!
Regarding question 2: Dataproc supports YARN graceful decommissioning, since Dataproc 1.2 uses Hadoop 2.8, the first Hadoop release in which YARN graceful decommissioning is available.
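A sketch of the corresponding scale-down command (at the time of writing this is exposed under `gcloud beta`; the cluster name, worker count, and timeout below are placeholders, so check `gcloud beta dataproc clusters update --help` for your SDK version):

```
# Scale the cluster down gracefully: YARN stops scheduling new containers
# on the workers being removed and waits up to the timeout for running
# work on them to finish before they are shut down.
gcloud beta dataproc clusters update my-cluster \
    --num-workers 2 \
    --graceful-decommission-timeout 1h
```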