I've set up a bare-metal cluster and want to provide different types of shared storage to my applications. One of these is an S3 bucket that I mount via goofys into a pod, which then exports it via NFS. I then use the NFS client provisioner against that share to automatically provision volumes for pods.
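Roughly, the wiring looks like this (a minimal sketch; the service name, namespace, and labels are placeholders, not my actual manifests):

```sh
# Hypothetical ClusterIP Service in front of the goofys/NFS pod;
# the provisioner is then pointed at this service's DNS name.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nfs-server        # placeholder name
spec:
  selector:
    app: nfs-server       # placeholder label on the NFS/goofys pod
  ports:
    - name: nfs
      port: 2049
EOF
```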
Leaving performance concerns aside, the issue is that the NFS client provisioner mounts the NFS share via the node's OS: when I set the server name to the NFS pod's service, that name is passed on to the node, and the mount fails because the node cannot resolve or reach the service/pod.
The only solution I've found so far has been to expose the service as a NodePort, block external connections to that port with ufw on the node, and configure the client provisioner to connect to 127.0.0.1:&lt;nodePort&gt;.
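For reference, a sketch of that workaround (the port number is whatever Kubernetes assigns, and the `nfs.mountOptions` value name is an assumption on my part; check the chart's values.yaml for the exact option names):

```sh
# Switch the service to NodePort and read back the assigned port
kubectl patch svc nfs-server -p '{"spec":{"type":"NodePort"}}'
NODE_PORT=$(kubectl get svc nfs-server -o jsonpath='{.spec.ports[0].nodePort}')

# Block external connections to that port on the node
sudo ufw deny "${NODE_PORT}/tcp"

# Point the provisioner at localhost; the mount option passes the
# non-standard port through to the NFS client (assumed chart values)
helm install nfs-client stable/nfs-client-provisioner \
  --set nfs.server=127.0.0.1 \
  --set nfs.path=/exports \
  --set "nfs.mountOptions={port=${NODE_PORT}}"
```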
I'm wondering: is there a way for the node itself to reach a cluster service by its DNS name?
I've managed to get around my issue by configuring the NFS client provisioner to use the service's ClusterIP instead of the DNS name: the node is unable to resolve the name to an IP, but it does have a route to the IP itself. Since the IP stays allocated for as long as the service exists, this is stable, but it can't easily be automated, because redeploying the NFS server helm chart changes the service's IP.
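The IP lookup can at least be scripted at deploy time. Something like the following (release names, namespace, and export path are placeholders for my setup):

```sh
# After (re)deploying the NFS server chart, read the service's ClusterIP
# and pass it to the provisioner instead of a DNS name.
NFS_IP=$(kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}')

helm upgrade --install nfs-client stable/nfs-client-provisioner \
  --set nfs.server="$NFS_IP" \
  --set nfs.path=/exports
```

That removes the manual step, but I'd still prefer a way to use the service's DNS name directly.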