Overview
I am writing a Kubernetes controller for a VerticalScaler CRD that can vertically scale a Deployment in the cluster. My spec references an existing Deployment object, and I'd like to enqueue a reconcile request for a VerticalScaler whenever the referenced Deployment is modified or deleted.
// VerticalScalerSpec defines the desired state of VerticalScaler.
type VerticalScalerSpec struct {
    // Name of the Deployment object which will be auto-scaled.
    DeploymentName string `json:"deploymentName"`
}
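For context, here is roughly how the reconciler consumes that reference. This is a minimal sketch only (imports are elided, as in the other snippets in this post, and the actual scaling logic is omitted):

func (r *VerticalScalerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var vs v1beta1.VerticalScaler
    if err := r.Get(ctx, req.NamespacedName, &vs); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Resolve the referenced Deployment by name in the same namespace.
    var deployment appsv1.Deployment
    key := client.ObjectKey{Namespace: vs.Namespace, Name: vs.Spec.DeploymentName}
    if err := r.Get(ctx, key, &deployment); err != nil {
        // The Deployment may not exist yet; a watch event should trigger
        // another reconcile once it appears.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // ... adjust the Deployment's container resource requests/limits here ...
    return ctrl.Result{}, nil
}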
Question
Is there a good way to watch an arbitrary resource when that resource is not owned by the controller and does not hold a reference back to the resource the controller manages?
What I Found
I think this should be configured in the Kubebuilder-standard SetupWithManager function for the controller, though it's possible a watch could be set up someplace else.
// SetupWithManager sets up the controller with the Manager.
func (r *VerticalScalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&v1beta1.VerticalScaler{}).
        Complete(r)
}
I've been searching for a good approach in controller-runtime/pkg/builder and the Kubebuilder docs. The closest example I found was the section "Watching Arbitrary Resources" in the kubebuilder-v1 docs on watches:
Controllers may watch arbitrary Resources and map them to a key of the Resource managed by the controller. Controllers may even map an event to multiple keys, triggering Reconciles for each key.
Example: To respond to cluster scaling events (e.g. the deletion or addition of Nodes), a Controller would watch Nodes and map the watch events to keys of objects managed by the controller.
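To make the quoted example concrete, a minimal sketch of that pattern, assuming the same pre-v0.15 controller-runtime API used in the snippets below (MyReconciler and MyKind are placeholders, and the hardcoded key is purely illustrative):

func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&v1beta1.MyKind{}).
        Watches(
            &source.Kind{Type: &corev1.Node{}},
            handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
                // Map each Node event to the keys of the managed objects;
                // returning multiple requests triggers a Reconcile for each.
                return []reconcile.Request{
                    {NamespacedName: types.NamespacedName{Namespace: "default", Name: "my-object"}},
                }
            })).
        Complete(r)
}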
My challenge is how to map the Deployment to the dependent VerticalScaler(s), since this information is not present on the Deployment. I could create an index on the VerticalScaler and look up dependent VerticalScalers from the MapFunc using a field selector, but it doesn't seem like I should do I/O inside a MapFunc. If the List call failed, I would have no way to retry or re-enqueue the event.
I have this code working using this imperfect approach:
const deploymentNameIndexField = ".spec.deploymentName"

// SetupWithManager sets up the controller with the Manager.
func (r *VerticalScalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
    if err := r.createIndices(mgr); err != nil {
        return err
    }
    return ctrl.NewControllerManagedBy(mgr).
        For(&v1beta1.VerticalScaler{}).
        Watches(
            &source.Kind{Type: &appsv1.Deployment{}},
            handler.EnqueueRequestsFromMapFunc(r.mapDeploymentToRequests)).
        Complete(r)
}

func (r *VerticalScalerReconciler) createIndices(mgr ctrl.Manager) error {
    // Index VerticalScalers by the name of the Deployment they reference,
    // so they can be looked up with a field selector in the MapFunc.
    return mgr.GetFieldIndexer().IndexField(
        context.Background(),
        &v1beta1.VerticalScaler{},
        deploymentNameIndexField,
        func(object client.Object) []string {
            vs := object.(*v1beta1.VerticalScaler)
            if vs.Spec.DeploymentName == "" {
                return nil
            }
            return []string{vs.Spec.DeploymentName}
        })
}
func (r *VerticalScalerReconciler) mapDeploymentToRequests(object client.Object) []reconcile.Request {
    deployment := object.(*appsv1.Deployment)
    ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    defer cancel()

    // Find every VerticalScaler in this namespace that references the
    // Deployment, via the field index registered in createIndices.
    var vsList v1beta1.VerticalScalerList
    if err := r.List(ctx, &vsList,
        client.InNamespace(deployment.Namespace),
        client.MatchingFields{deploymentNameIndexField: deployment.Name},
    ); err != nil {
        // logr is structured, not printf-style, so pass key/value pairs
        // rather than format specifiers.
        r.Log.Error(err, "could not list VerticalScalers; change to Deployment will not be reconciled",
            "deployment", deployment.Name, "namespace", deployment.Namespace)
        return nil
    }

    requests := make([]reconcile.Request, len(vsList.Items))
    for i := range vsList.Items {
        requests[i] = reconcile.Request{
            NamespacedName: client.ObjectKeyFromObject(&vsList.Items[i]),
        }
    }
    return requests
}
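One practical note with this setup: since the controller now watches Deployments, in a Kubebuilder project its ServiceAccount also needs permission to do so, which means adding an RBAC marker along these lines above the reconciler and re-running make manifests:

//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch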
Other Considered Approaches
Just to cover my bases, I should mention that I don't want to set the VerticalScaler as an owner of the Deployment, because I don't want the Deployment to be garbage-collected if the VerticalScaler is deleted. Even a non-controller ownerReference causes garbage collection.
I also considered using a Channel source, but the docs say that is for events originating from outside the cluster, which this is not.
I could also create a separate controller for the Deployment and update some field on the dependent VerticalScaler(s) from that controller's Reconcile function, but then I would also need a finalizer to trigger a VerticalScaler reconcile when a Deployment is deleted, and that seems like overkill.
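For completeness, a rough sketch of that rejected approach: a second reconciler registered with For(&appsv1.Deployment{}) that reuses the field index above and nudges dependent VerticalScalers by bumping an annotation (the reconciler type and annotation key are made up, it is assumed to embed client.Client, and the finalizer needed for deletions is omitted):

func (r *DeploymentWatcherReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Find the VerticalScalers that reference this Deployment via the index.
    var vsList v1beta1.VerticalScalerList
    if err := r.List(ctx, &vsList,
        client.InNamespace(req.Namespace),
        client.MatchingFields{deploymentNameIndexField: req.Name},
    ); err != nil {
        // Unlike a MapFunc, a Reconcile can return the error to be retried.
        return ctrl.Result{}, err
    }
    for i := range vsList.Items {
        vs := &vsList.Items[i]
        if vs.Annotations == nil {
            vs.Annotations = map[string]string{}
        }
        // Bump a (made-up) annotation so the VerticalScaler controller reconciles.
        vs.Annotations["verticalscaler.example.com/deployment-observed"] = time.Now().UTC().Format(time.RFC3339)
        if err := r.Update(ctx, vs); err != nil {
            return ctrl.Result{}, err
        }
    }
    return ctrl.Result{}, nil
}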
I could have my VerticalScaler reconciler add an annotation to the Deployment, but the annotation could be overwritten if the Deployment is managed by, for example, Helm. That approach also would not produce a reconcile request in the case where the VerticalScaler is created before the Deployment.
Answer
You do indeed use a map function and a normal watch. https://github.com/coderanger/migrations-operator/blob/088a3b832f0acab4bfe02c03a4404628c5ddfd97/components/migrations.go#L64-L91 shows an example. You do often end up having to do I/O in the map function to work out which of the root objects the event corresponds to, and I agree it kind of sucks that there's no way to do much other than log or panic if those calls fail.
You can also use non-controller owner references or annotations to store the mapped target for a given Deployment, which makes the map function much simpler but usually less responsive. Overall it depends on how dynamic this needs to be. Feel free to pop into the #kubebuilder Slack channel for help.
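To illustrate the annotation option: if the VerticalScaler reconciler stamps its own name onto the Deployment, the map function becomes pure and cannot fail. A minimal sketch, assuming a made-up annotation key and the same MapFunc signature used above:

// Hypothetical annotation holding the name of the VerticalScaler that
// targets this Deployment; with it, the map function needs no I/O.
const targetAnnotation = "verticalscaler.example.com/target"

func mapDeploymentViaAnnotation(object client.Object) []reconcile.Request {
    name, ok := object.GetAnnotations()[targetAnnotation]
    if !ok {
        return nil // Deployment is not referenced by any VerticalScaler.
    }
    return []reconcile.Request{{
        NamespacedName: types.NamespacedName{
            Namespace: object.GetNamespace(),
            Name:      name,
        },
    }}
}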