run spark-operator on k8s cluster


Hi all, I'm trying to run the Spark Pi example on my k8s cluster. I installed the Spark Operator, pulled the image, and ran this command:

kubectl apply -f ./spark-pi.yaml

Documentation here.
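
For reference, the spark-pi.yaml from the operator's examples looks roughly like this; the image tag, Spark version, jar path, and serviceAccount name below are assumptions that vary by release:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"   # assumed release image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark-operator-spark        # must be allowed to manage pods
  executor:
    cores: 1
    instances: 1
    memory: "512m"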

When I check the driver pod's logs, I see this:

pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:namespace:spark-operator-spark" cannot list resource "pods" in API group "" at the cluster scope
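
The message names the exact ServiceAccount and the missing permission, so the same check can be reproduced from outside the pod with kubectl impersonation (keeping the literal "namespace" placeholder from the log):

kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:namespace:spark-operator-spark

If this prints "no", that ServiceAccount lacks a ClusterRole/ClusterRoleBinding granting list on pods at cluster scope.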

When I check the operator pod's logs, I see this:

pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: Unauthorized
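
Unlike the "forbidden" error above, "Unauthorized" is an authentication failure: the API server is rejecting the token mounted into the operator pod outright. A common first check is to delete the operator pod so its Deployment recreates it with a freshly mounted token (the pod name below is a placeholder):

kubectl -n spark-operator get pods
kubectl -n spark-operator delete pod <spark-operator-pod>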

Here is my rbac.yaml for the ClusterRole and ClusterRoleBinding (the same file as the original Helm chart's): https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/charts/spark-operator-chart/templates/rbac.yaml

Any solution?


1 Answer

Ramzi Hosisey answered:

Before installing the operator you need to set up:

- a ServiceAccount
- a RoleBinding (a ClusterRoleBinding in the example below)
- a namespace for the Spark applications (optional but strongly recommended)
- a namespace for the Spark Operator (optional but strongly recommended)

See the example below (note the --- separators between the YAML documents):

apiVersion: v1
kind: Namespace
metadata:
  name: spark-operator   # namespace for the operator itself
---
apiVersion: v1
kind: Namespace
metadata:
  name: spark-apps       # namespace where the Spark applications run
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: spark-apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-operator-role   # ClusterRoleBindings are cluster-scoped, so no namespace here
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                  # built-in ClusterRole that can manage pods, services, etc.
subjects:
  - kind: ServiceAccount
    name: spark
    namespace: spark-apps

taken from https://gist.github.com/dzlab/b546a450a9e8cfa5c8c3ff0a7c9ff091#file-spark-operator-yaml
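
Assuming the manifest above is saved as spark-operator-rbac.yaml (the filename is just a placeholder), it can be applied and the resulting permissions checked with kubectl impersonation:

kubectl apply -f spark-operator-rbac.yaml
kubectl auth can-i list pods -n spark-apps --as=system:serviceaccount:spark-apps:spark   # expect "yes"

The SparkApplication then needs to run in the spark-apps namespace with spec.driver.serviceAccount set to spark; otherwise the driver falls back to a ServiceAccount that doesn't have these permissions.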