I've successfully deployed YuniKorn on a Kubernetes cluster and can deploy Spark applications via the Spark Operator. My problem is that when I specify the queue in the SparkApplication, it does not work as expected.

My yunikorn-configs looks like this:

    partitions:
      - name: default
        placementrules:
          - name: provided
            create: true
        nodesortpolicy:
          type: binpacking
        queues:
          - name: root
            submitacl: "*"

and the annotations in the SparkApplication YAML look like this:

    kind: SparkApplication
    metadata:
      name: "my-spark-app"
      namespace: spark-op
    spec:
      sparkVersion: 3.3.1
      driver:
        annotations:
          yunikorn.apache.org/queue: test
      ...
      executor:
        annotations:
          yunikorn.apache.org/queue: test

The expected behavior is that YuniKorn launches the application in the root.test queue, but instead it launches it in the root.default queue.

What am I missing here?

1 answer:

Answer by M Desai:

Have you tried setting the queue using the Spark conf?

    --conf spark.kubernetes.driver.label.queue=<QUEUE_NAME>
    --conf spark.kubernetes.executor.label.queue=<QUEUE_NAME>
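
Since you're submitting through the Spark Operator rather than spark-submit directly, a sketch of the equivalent in the SparkApplication manifest would be to set the labels on the driver and executor pods (this assumes the `driver.labels`/`executor.labels` fields of the spark-operator CRD, which correspond to the confs above; `root.test` is the queue from your setup):

    spec:
      driver:
        labels:
          queue: root.test
      executor:
        labels:
          queue: root.test

The label values end up on the generated pods, which is where YuniKorn's placement rules read them from.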

Ref: https://spark.apache.org/docs/latest/running-on-kubernetes.html#get-started