How to migrate from CentOS/RHEL-packaged to upstream Docker
This document (000020225) is provided subject to the disclaimer at the end.
Environment
- A Kubernetes cluster launched with the Rancher Kubernetes Engine (RKE) CLI, or a Rancher v2.x-launched Kubernetes cluster on custom nodes
- Nodes running CentOS 7.x or RHEL 7.x, with Docker installed from the CentOS/RHEL extras repository.
Situation
Deprecation notice
Docker is not packaged in the RHEL 8 or 9 repositories, and starting with Rancher v2.7 the RHEL-packaged version of Docker (1.13) for RHEL 7 has been removed from the Rancher Support Matrix. Therefore, customers must migrate from RHEL-packaged Docker 1.13 to the upstream Docker version in their RKE clusters.
This article describes how to migrate a CentOS or RHEL node in an RKE cluster from the CentOS/RHEL-packaged Docker package to the upstream package from Docker.
To perform this migration you must first uninstall the CentOS/RHEL-packaged Docker before installing the upstream version. This process is destructive and removes all container state from the host. The steps below therefore guide you through removing the node from the cluster, performing the package migration, and finally re-adding the node to the cluster.
Resolution
Cluster launched by the RKE CLI
Create a Backup
As with any cluster maintenance, it is recommended that you first take an etcd snapshot of the cluster to recover from in the event of an issue. A snapshot can be created per the RKE documentation, and you should copy the snapshot off an etcd node to a safe location outside the cluster.
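For example, a snapshot can be taken with the RKE CLI along the following lines. This is a minimal sketch: the configuration file name, snapshot name, host name and backup path are illustrative, and snapshots are written by default to /opt/rke/etcd-snapshots on the etcd nodes.
# Take a named etcd snapshot of the cluster (cluster.yml and the snapshot name are examples)
rke etcd snapshot-save --config cluster.yml --name pre-docker-migration
# Copy the snapshot off an etcd node to a safe location outside the cluster (hypothetical host and path)
scp user@etcd-node:/opt/rke/etcd-snapshots/pre-docker-migration* /backup/location/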
Perform migration on each cluster node in turn
1. Check whether you should first add an additional node to the cluster, to replace the node during its migration:
   - Controlplane or etcd nodes: if the node is a controlplane or etcd node, it is recommended that you first add an additional node to replace it, or add the role(s) to an existing node, to ensure that quorum is maintained in the event of failure of another node during the process. If the node is the single etcd or controlplane node in the cluster, adding a replacement node is not optional. Add the new etcd and/or controlplane role node to the cluster configuration YAML and run rke up to provision it.
   - Worker nodes: if the worker nodes within the cluster are heavily loaded, or if the node is the sole worker role node, you should provision an additional worker node to replace the node during the migration. Add the new worker role node to the cluster configuration YAML and run rke up to provision it.
2. Remove the node that you are migrating from the cluster: remove it from the cluster configuration YAML and then run rke up to reconcile the cluster.
3. Once the rke up invocation in step 2 completes successfully, run the Extended Rancher 2 cleanup script on the node that you are migrating, to clean up Rancher state.
4. Switch to the upstream Docker package on the node, by following the Docker Engine installation documentation for CentOS or using the Rancher installation script for Docker (see the example commands after this list).
5. Add the node back to the cluster configuration YAML and run rke up to provision it.
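As an illustration of steps 3 and 4, the following is a minimal sketch of the commands run on the node being migrated, assuming you use the Rancher installation script for Docker 20.10. The cleanup script location, package names and script version are examples; review the cleanup script before running it and confirm the Docker version against the Rancher Support Matrix.
# Run the Extended Rancher 2 cleanup script to remove Rancher/Kubernetes state
# (location assumed to be the rancherlabs/support-tools GitHub repository; review before running)
curl -LO https://github.com/rancherlabs/support-tools/raw/master/extended-rancher-2-cleanup/extended-cleanup-rancher2.sh
bash extended-cleanup-rancher2.sh
# Remove the CentOS/RHEL-packaged Docker 1.13 (example package names from the extras repository)
yum remove -y docker docker-client docker-common
# Install upstream Docker using the Rancher installation script (example script version)
curl https://releases.rancher.com/install-docker/20.10.sh | sh
systemctl enable --now docker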
Custom cluster launched by Rancher
Create a Backup
As with any cluster maintenance, it is recommended that you first take an etcd snapshot of the cluster to recover from in the event of an issue. A snapshot can be created per the Rancher documentation, and, if S3 backups are not configured for the cluster, you should copy the snapshot off an etcd node to a safe location outside the cluster.
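Snapshots for Rancher-launched clusters are taken from the Rancher UI and are written by default to /opt/rke/etcd-snapshots on the etcd nodes. As a minimal sketch, assuming an illustrative snapshot name, host and backup path:
# Copy the snapshot off an etcd node to a location outside the cluster (hypothetical names)
scp user@etcd-node:/opt/rke/etcd-snapshots/<snapshot-name>* /backup/location/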
Perform migration on each cluster node in turn
1. Check whether you should first add an additional node to the cluster, to replace the node during its migration:
   - Controlplane or etcd nodes: if the node is a controlplane or etcd node, it is recommended that you first add an additional node to replace it, to ensure that quorum is maintained in the event of failure of another node during the process. If the node is the single etcd or controlplane node in the cluster, adding a replacement node is not optional. Add the new etcd and/or controlplane role node by running the Rancher agent command from the 'Edit Cluster' view, with the appropriate roles, on the replacement node.
   - Worker nodes: if the worker nodes within the cluster are heavily loaded, or if the node is the sole worker role node, you should provision an additional worker node to replace the node during the migration. Add the new worker role node by running the Rancher agent command from the 'Edit Cluster' view, with the worker role, on the replacement node.
2. Remove the node that you are migrating from the cluster: delete it from the node list for the cluster within Rancher.
3. Once the cluster reconciliation triggered by step 2 is complete, and the cluster no longer shows as updating within Rancher, run the Extended Rancher 2 cleanup script on the node that you are migrating, to clean up Rancher state.
4. Switch to the upstream Docker package on the node, by following the Docker Engine installation documentation for CentOS or using the Rancher installation script for Docker (see the example commands after this list).
5. Add the node back by running the Rancher agent command from the 'Edit Cluster' view, with the appropriate roles, on the node.
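As an illustration of step 4 using the upstream Docker CE repository rather than the Rancher installation script, the commands below follow the Docker Engine installation documentation for CentOS. The package names to remove are examples of the RHEL/CentOS extras packages; confirm the supported Docker version against the Rancher Support Matrix before installing.
# Remove the CentOS/RHEL-packaged Docker 1.13 (example package names)
yum remove -y docker docker-client docker-common
# Add the upstream Docker CE repository and install Docker Engine
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable --now docker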
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000020225
- Creation Date: 06-May-2021
- Modified Date: 01-Mar-2024
- SUSE Rancher
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com