SUSE Support

Here When You Need Us

helm-install-rke2-calico fails on downstream clusters; cannot re-use a name that is still in use

This document (000021669) is provided subject to the disclaimer at the end of this document.

Environment

  • Rancher v2.6+
  • A Rancher-provisioned RKE2 cluster

Situation

Following an RKE2 upgrade, the helm-install-rke2-calico Job in the kube-system Namespace fails with the error 'cannot re-use a name that is still in use', as in the following example from the Pod logs:

+ echo 'Installing helm chart'
+ helm install --set-string global.cattle.systemDefaultRegistry=docker.io --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 --set-string global.systemDefaultIngressClass=ingress-nginx --set-string global.systemDefaultRegistry=docker.io rke2-calico /tmp/rke2-calico.tgz --values /config/values-10_HelmChartConfig.yaml
Error: INSTALLATION FAILED: cannot re-use a name that is still in use

Resolution

  1. List the current rke2-calico helm release secrets in the kube-system Namespace; all of them will be in a superseded status, e.g.:
     kubectl -n kube-system get secrets --field-selector type=helm.sh/release.v1 -o custom-columns='NAME:.metadata.name,STATUS:.metadata.labels.status' -l name=rke2-calico
     NAME                                STATUS
     sh.helm.release.v1.rke2-calico.v1   superseded
     sh.helm.release.v1.rke2-calico.v2   superseded
  2. Delete these superseded rke2-calico helm release secrets, e.g.:
     kubectl -n kube-system delete secret sh.helm.release.v1.rke2-calico.v1
     kubectl -n kube-system delete secret sh.helm.release.v1.rke2-calico.v2
  3. Delete the rke2-calico helm install Job:
     kubectl -n kube-system delete job helm-install-rke2-calico
  4. Restart the rke2-server service on a server node to re-trigger the helm-install-rke2-calico Job and resolve the issue:
     systemctl restart rke2-server
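
The secret cleanup in steps 1 and 2 can be sketched as a single pass. This is a hedged example, not part of the official procedure: the `superseded_secrets` filter is an assumption-free piece of text processing over the `NAME STATUS` listing from step 1, so its output can be reviewed before any deletion; the kubectl pipeline at the end is commented out and should only be run once the list has been verified.

```shell
#!/bin/sh
# Filter the "NAME STATUS" listing from step 1 down to the secret names
# that are safe to delete (status "superseded" only).
superseded_secrets() {
  awk 'NR > 1 && $2 == "superseded" { print $1 }'
}

# Example input matching the output shown in step 1.
listing='NAME                                STATUS
sh.helm.release.v1.rke2-calico.v1   superseded
sh.helm.release.v1.rke2-calico.v2   superseded'

printf '%s\n' "$listing" | superseded_secrets
# prints:
# sh.helm.release.v1.rke2-calico.v1
# sh.helm.release.v1.rke2-calico.v2

# Against a live cluster (verify the list first!), the same filter could
# feed the deletion in step 2, followed by the Job deletion in step 3:
# kubectl -n kube-system get secrets --field-selector type=helm.sh/release.v1 \
#   -l name=rke2-calico \
#   -o custom-columns='NAME:.metadata.name,STATUS:.metadata.labels.status' \
#   | superseded_secrets \
#   | xargs -r -n1 kubectl -n kube-system delete secret
# kubectl -n kube-system delete job helm-install-rke2-calico
```

Filtering on the status column guards against deleting a release secret that is still in a deployed state on a cluster where the symptom differs from the example above.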

Cause

A failed rke2-calico helm install Job during an RKE2 upgrade leaves the cluster with only superseded rke2-calico releases. With no revision in a deployed state, the Job's subsequent helm install attempt collides with the existing release name and fails.

helm -n kube-system history rke2-calico
REVISION        UPDATED                         STATUS          CHART                   APP VERSION     DESCRIPTION     
1               Mon Feb  3 09:35:23 2025        superseded      rke2-calico-v3.29.100   v3.29.1         Install complete
2               Mon Feb  3 09:49:28 2025        superseded      rke2-calico-v3.29.100   v3.29.1         Upgrade complete
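The history above can be restated as a small check. This is an illustration of the stuck state, not the exact logic helm uses internally: the `revisions` data is a stand-in for the per-revision status labels on the release secrets, and the point is simply that no revision is "deployed" while the release name still exists.

```shell
#!/bin/sh
# Simulated revision/status pairs matching the helm history output above.
revisions='v1 superseded
v2 superseded'

# Look for any revision in the "deployed" state.
deployed=$(printf '%s\n' "$revisions" | awk '$2 == "deployed" { print $1 }')

if [ -z "$deployed" ]; then
  # No deployed revision to upgrade, yet the name is still occupied by the
  # superseded release secrets, so a fresh "helm install" is refused with
  # "cannot re-use a name that is still in use".
  echo "no deployed revision: release is stuck"
fi
```

Deleting the superseded release secrets, as in the Resolution, frees the name so the re-created Job can install cleanly.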


Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000021669
  • Creation Date: 15-Jan-2025
  • Modified Date: 06-Feb-2025
  • SUSE Rancher
