I'm trying to convert from pure Docker to Kubernetes + Docker. I use the privileged flag in Docker to mount my NFS volumes in the CMD step. On Google Container Engine this is not allowed, and the preference seems to be to declare the mount as a volume anyway.

When I do this, my deploys all hang in Pending status, as shown by kubectl get pods. To fix this, I set the allow_privileged flag in cluster/saltbase/pillar/privilege.sls as seen here, following these steps. When the reboot command kicks in, all of the changes are reverted. When I don't reboot, the file changes stick and my NFS mount works fine.
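
For reference, the edit I'm making is just flipping the single flag in that pillar file (shown here from memory, so the exact comment wording may differ):

    # If true, the master and nodes will run privileged containers
    allow_privileged: "true"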

How do I permanently edit cluster/saltbase/pillar/privilege.sls to set allow_privileged on Google Container Engine so that my hosts survive a reboot?

1 Answer

Answer from Robert Bailey:
Update: Privileged mode is enabled by default starting with the 1.1 release of Kubernetes, which is now available in Google Container Engine.
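
With privileged mode enabled on the cluster, a container opts in through its security context. A minimal sketch (the pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: privileged-example
    spec:
      containers:
      - name: app
        image: my-nfs-client-image     # placeholder image
        securityContext:
          privileged: true             # only honored when privileged mode is allowed on the cluster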

This is actually a known issue with Kubernetes that will not be fixed in time for the 1.0 release, which means it won't be fixed in Google Container Engine soon either (see #10489). The eventual goal is to replace the allow_privileged flag with an admission control policy, but I don't know how soon after the 1.0 release that feature will land.

In the meantime, Kubernetes supports specifying an NFS mount as part of the pod definition (the kubelet will mount the NFS volume onto the node and then mount it into your container), so you shouldn't need allow_privileged to make this work.

For an example of an NFS mount, check out nfs-web-pod.yaml.
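
The shape of such a pod spec is roughly the following; the NFS server address, export path, and image here are placeholders, so adapt them to your environment:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-example
    spec:
      containers:
      - name: web
        image: nginx                           # placeholder image
        volumeMounts:
        - name: nfs-data
          mountPath: /usr/share/nginx/html     # where the NFS export appears in the container
      volumes:
      - name: nfs-data
        nfs:
          server: 10.0.0.5                     # placeholder NFS server address
          path: /exports                       # placeholder export path on the server
          readOnly: false

The kubelet performs the NFS mount on the node itself, so the container never needs privileged access.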