Hi all, I'm trying to run the Spark Pi example on my k8s cluster. I installed the Spark operator, pulled the image, and ran this command:
kubectl apply -f ./spark-pi.yaml
Documentation here.
When I check the driver pod's logs, I get this:
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:namespace:spark-operator-spark" cannot list resource "pods" in API group "" at the cluster scope
The operator pod's logs show this:
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: Unauthorized
Here is my rbac.yaml file with the ClusterRole and ClusterRoleBinding (same file as the original Helm chart): https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/charts/spark-operator-chart/templates/rbac.yaml Any solution?
Before installing the operator you need to set up:

- a ServiceAccount
- a RoleBinding
- a namespace for the Spark applications (optional but strongly recommended)
- a namespace for the Spark operator (optional but strongly recommended)
See the example below, taken from https://gist.github.com/dzlab/b546a450a9e8cfa5c8c3ff0a7c9ff091#file-spark-operator-yaml
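For reference, here is a minimal sketch of those resources in one manifest. The namespace names (`spark-operator`, `spark-apps`) and the ServiceAccount name (`spark`) are assumptions for illustration; adjust them to your cluster and make sure the ServiceAccount referenced in your SparkApplication's `spec.driver.serviceAccount` matches. A namespaced Role/RoleBinding like this grants the driver pod the `pods` permissions that your error message says are missing:

```yaml
# Namespace for the operator itself (assumed name)
apiVersion: v1
kind: Namespace
metadata:
  name: spark-operator
---
# Namespace where Spark applications run (assumed name)
apiVersion: v1
kind: Namespace
metadata:
  name: spark-apps
---
# ServiceAccount used by the Spark driver pods
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: spark-apps
---
# Role granting the driver the permissions it needs on pods
# and related resources within the application namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role
  namespace: spark-apps
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["*"]
---
# Bind the Role to the ServiceAccount above
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: spark-apps
subjects:
- kind: ServiceAccount
  name: spark
  namespace: spark-apps
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```

Apply it with `kubectl apply -f` before installing the operator, then install the operator pointed at the application namespace. Your error (`User "system:serviceaccount:namespace:spark-operator-spark" cannot list resource "pods"`) indicates the driver's ServiceAccount has no binding granting pod access in the namespace where it runs, which is exactly what the RoleBinding above provides.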