I need to run pods on multiple nodes with a very large (700 GB) read-only dataset in Kubernetes. I tried using ReadOnlyMany, but it fails in a multi-node setup and was generally very unstable.
Is there a way for pods to create a new persistent disk from a snapshot, attach it to the pod, and destroy it when the pod is destroyed? This would allow me to update the snapshot with new data once in a while.
You can manually provision a persistent disk using an existing image on GCP:
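For example (disk, image/snapshot names and zone are placeholders — the `--source-snapshot` variant matches the snapshot workflow you describe):

```bash
# Create a standalone persistent disk from an existing image...
gcloud compute disks create my-data-disk \
    --image=my-dataset-image \
    --zone=us-central1-a

# ...or from a snapshot, if you keep the dataset as a snapshot
gcloud compute disks create my-data-disk \
    --source-snapshot=my-dataset-snapshot \
    --zone=us-central1-a
```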
Then use it in your pod spec:
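Something along these lines, mounting the disk read-only via the in-tree `gcePersistentDisk` volume (pod name, image and mount path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dataset-reader            # placeholder pod name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: dataset
      mountPath: /data            # where the dataset appears inside the container
      readOnly: true
  volumes:
  - name: dataset
    gcePersistentDisk:
      pdName: my-data-disk        # must match the disk created with gcloud above
      fsType: ext4
      readOnly: true              # read-only attach lets multiple nodes use the same PD
```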
The GCE storage class doesn't support snapshots, so unfortunately you can't do this with PVCs alone.
Hope it helps.