no matches for kind "AdmissionConfiguration" in version "apiserver.config.k8s.io/v1"


I have AKS with Kubernetes version 1.23. I want to activate Pod Security admission at the cluster level by setting it via an AdmissionConfiguration, as explained here:

https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/

As I have read, the PodSecurity feature gate is enabled by default on Kubernetes version 1.23. I have created a YAML file based on the configuration shown in the link; however, when I apply it I get the following error:

$ k create -f podsecurity.yaml
error: unable to recognize "podsecurity.yaml": no matches for kind "AdmissionConfiguration" in version "apiserver.config.k8s.io/v1"

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", 
GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", 
GitCommit:"8211ae4d6757c3fedc53cd740d163ef65287276a", GitTreeState:"clean", BuildDate:"2022-03-31T20:28:03Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}

I googled a lot but couldn't find a solution or what caused it.

I would appreciate it if someone could help.

I am able to activate it at the namespace level, as explained here: https://kubernetes.io/docs/tutorials/security/ns-level-pss/, by adding a label to the namespace. However, I want to activate it at the cluster level, and that doesn't work.
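For reference, the namespace-level activation mentioned above works through the pod-security.kubernetes.io labels. A minimal sketch (the namespace name my-app is made up):

```yaml
# Hypothetical namespace with Pod Security Standards labels; these labels
# are what the ns-level-pss tutorial adds, here set to the restricted level.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```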

4

There are 4 answers

2
Doctor On BEST ANSWER

This is because the AdmissionConfiguration is not a resource to be applied inside the Kubernetes cluster.

It's a static file that must be given to the API server.

If your Kubernetes cluster is managed by a cloud provider and you don't have direct access to the API server, you can use a Pod Security admission webhook in your cluster.
It's very simple to install and works very well.

This way you will be able to edit a ConfigMap containing the cluster-wide config.

apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["kube-system","policy-test1"]

For more information, I have found the EKS best-practices documentation pretty useful: https://aws.github.io/aws-eks-best-practices/security/docs/pods/

You should also note that namespace labels take precedence over the cluster-wide config.

0
czhujer On

@Doctor Yes, it's the same solution as was written below.

This file is not a "usual" custom resource, but a config file for the K8s API server.

To @all:

If you don't have control over the API server, or can't change its configuration, a second option is using a policy engine (OPA Gatekeeper or Kyverno).

Kyverno has an existing policy for this, https://kyverno.io/policies/psa/add-psa-labels/add-psa-labels/, and several complementary ones: https://kyverno.io/policies/?policytypes=Pod%2520Security%2520Admission

0
k8s-alex On

It should be presented as a file and fed to the API server via the --admission-control-config-file flag.
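For illustration, such a file could look like this, following the upstream docs (the path in the comment is an assumption, and the inner apiVersion is the v1beta1 one needed on 1.23/1.24):

```yaml
# Static file on the control-plane host, passed to the API server with e.g.
# --admission-control-config-file=/etc/kubernetes/pod-security.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
```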

0
Baibhav Vishal On

From the link you shared in the question: "Note: pod-security.admission.config.k8s.io/v1 configuration requires v1.25+. For v1.23 and v1.24, use v1beta1. For v1.22, use v1alpha1."

It redirects to this: https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/

It's a little note at the end of the page. Since you are using 1.23, use pod-security.admission.config.k8s.io/v1beta1 as the apiVersion in line 6 of your YAML config file.
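In other words, on 1.23 the top of the config file would read (a sketch; only the inner apiVersion differs from the v1 example in the current docs):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    # v1beta1 here, because the cluster runs Kubernetes 1.23
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
```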

Also, you may need to run kube-apiserver --admission-control-config-file=/some/path/pod-security.yaml.

On Rancher k3s, when you are starting the cluster, pass the same flags in the k3s service file. For me the location was /etc/systemd/system/k3s.service, like:

ExecStart=/usr/local/bin/k3s \
    server \
        '--cluster-cidr' \
        '172.16.16.0/20' \
        '--service-cidr' \
        '172.16.0.0/20' \
        '--kube-apiserver-arg=enable-admission-plugins=NodeRestriction,NamespaceLifecycle,PodSecurity,ServiceAccount' \
        '--kube-apiserver-arg=admission-control-config-file=/home/ubuntu/pod-security.yaml' \