I am building a platform on top of Kubernetes that, among other requirements, should:
- Be OS agnostic: any Linux with a sane kernel and cgroup mounts.
- Offer persistent storage by leveraging cluster node disk(s).
- Offer ReadWriteMany volumes or a way to implement shared storage.
- Not bind pods to a specific node (unlike local persistent volumes).
- Automatically reattach volumes when pods are migrated (e.g. due to a node drain or a lost node).
- Offer data replication at the storage level.
- Not assume a dedicated raw block device is available on each node.
I'm addressing the first point by using static binaries for the Kubernetes components and the container engine, coupled with minimal host tooling that also consists of static binaries.
I'm still looking for a solution for persistent storage.
What I evaluated/used so far:
- Rook: Although it meets the requirements feature-wise, there's a bug and volumes are not moved together with the pod: https://github.com/rook/rook/issues/1507
- OpenEBS: It doesn't meet the OS-agnostic requirement, since it needs an iSCSI client and tools on each node, which depend on the host OS: https://docs.openebs.io/docs/next/prerequisites.html
So the question is: what other options do I have for Kubernetes persistent storage that uses the cluster nodes' disks?
The following options can be considered:
Kubernetes 1.14.0 onwards supports local persistent volumes. You can make use of local PVs with node labels/affinity. You might have to run stateful workloads in HA (master-slave) mode so the data stays available in case of node failures.
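As a minimal sketch of this option (names, the disk path and the node name are illustrative), a local PV pinned to a node via node affinity looks roughly like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage              # illustrative name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1             # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # directory or mount point on the node (assumed)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1            # pins the PV to this node
```

Note that this node affinity is exactly why pods consuming the PV get scheduled back to the same node, hence the suggestion to run the workload itself in HA mode.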
You can install an NFS server on one of the cluster nodes and use it as storage for your workloads. NFS supports ReadWriteMany. This might work well if you set up the cluster on bare metal.
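As a sketch, assuming an NFS export at /srv/nfs served by a node reachable at 10.0.0.10 (both assumptions), a ReadWriteMany PV/PVC pair could look like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                     # illustrative name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany                # NFS allows shared read-write access
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10              # node running the NFS server (assumed)
    path: /srv/nfs                 # exported directory (assumed)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # bind to the statically created PV above
  resources:
    requests:
      storage: 50Gi
```

Keep in mind that mounting NFS volumes generally requires the NFS client utilities on each node, which may run into the same OS-agnostic concern you raised for iSCSI.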
Rook is also a good option, which you have already tried, though it is not production ready yet.
Among the three, the first option best suits your requirements. I would also like to hear about other options from the community.