spark-submit on OpenShift to use specific worker nodes


I am trying to run spark-submit on OpenShift so that the driver and executor pods are scheduled on specific worker nodes. Below is my command.

./spark/bin/spark-submit \
--master xx:6443 \
--deploy-mode cluster \
--name <Name> \
--class com.xxx \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.namespace=xxxx \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=default \
--conf spark.kubernetes.container.image=my-image \
--conf spark.jars.ivy=/tmp/.ivy \
--conf spark.kubernetes.executor.limit.cores=0.1 \
--conf spark.kubernetes.driver.limit.cores=0.1 \
--conf spark.kubernetes.driver.request.cores=0.1 \
--conf spark.kubernetes.executor.request.cores=0.1 \
local:///opt/xx.jar

I have been given specific worker nodes, which carry the taint key/value pair xxx/yyyy. Can you help me with how to pass this in the spark-submit conf so the job uses those specific worker nodes?

Thanks.
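For reference, Kubernetes matches a node's taint against pod tolerations, and Spark on Kubernetes exposes tolerations only through pod template files (the `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` properties, available since Spark 3.0), not through individual `--conf` keys. A minimal sketch, assuming the taint is `xxx=yyyy` with effect `NoSchedule` (the real effect on the cluster may differ):

```yaml
# pod-template.yaml — hypothetical file name.
# Toleration so Spark driver/executor pods may be scheduled
# onto nodes tainted with xxx=yyyy.
apiVersion: v1
kind: Pod
spec:
  tolerations:
    - key: "xxx"
      operator: "Equal"
      value: "yyyy"
      effect: "NoSchedule"   # assumption: change to match the taint's actual effect
```

The template would then be passed for both driver and executors by adding two lines to the spark-submit command above:

--conf spark.kubernetes.driver.podTemplateFile=/path/to/pod-template.yaml \
--conf spark.kubernetes.executor.podTemplateFile=/path/to/pod-template.yaml \

Note that a toleration only *allows* scheduling onto the tainted nodes; it does not force it. To pin the pods to those nodes, the nodes would also need a label, selected with `--conf spark.kubernetes.node.selector.<labelKey>=<labelValue>`, which applies to driver and executor pods alike.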


There are 0 answers