I've successfully deployed YuniKorn on a Kubernetes cluster and can run Spark applications through the spark-operator. My problem is that when I specify a queue in the SparkApplication, it does not work as expected.
My yunikorn-configs looks like this:
partitions:
  - name: default
    placementrules:
      - name: provided
        create: true
    nodesortpolicy:
      type: binpacking
    queues:
      - name: root
        submitacl: "*"
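For completeness, a variant I could also try (this is a sketch on my side, assuming child queues simply nest under root in the queue config) is to define the target queue statically instead of relying on the provided rule to create it:

```yaml
partitions:
  - name: default
    placementrules:
      - name: provided
        create: true
    nodesortpolicy:
      type: binpacking
    queues:
      - name: root
        submitacl: "*"
        queues:
          # statically defined child queue, i.e. root.test
          - name: test
```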
and the annotations in the SparkApplication YAML look like this:
kind: SparkApplication
metadata:
  name: "my-spark-app"
  namespace: spark-op
spec:
  sparkVersion: 3.3.1
  driver:
    annotations:
      yunikorn.apache.org/queue: test
  ...
  executor:
    annotations:
      yunikorn.apache.org/queue: test
The expected behavior is that YuniKorn launches the application in the root.test queue, but instead it lands in the root.default queue.
What am I missing here?
Have you tried setting the queue through the Spark configuration instead of the pod annotations?
Ref: https://spark.apache.org/docs/latest/running-on-kubernetes.html#get-started
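To make that concrete, here is a hedged sketch. Spark's generic pod-label options (`spark.kubernetes.driver.label.[LabelName]` and `spark.kubernetes.executor.label.[LabelName]`) can stamp a label onto the pods, and my assumption is that YuniKorn will pick the queue up from a `queue` label on the driver/executor pods; the exact label YuniKorn honors, and the fully qualified `root.test` value, are assumptions worth verifying against your YuniKorn version:

```yaml
spec:
  sparkConf:
    # Assumption: YuniKorn reads the target queue from a "queue" pod label;
    # using the fully qualified name root.test rather than the short name test.
    "spark.kubernetes.driver.label.queue": "root.test"
    "spark.kubernetes.executor.label.queue": "root.test"
```

If the annotations are the preferred route, it may also be worth trying the fully qualified value `root.test` in the existing `yunikorn.apache.org/queue` annotations, in case the provided placement rule does not expand the short name.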