Node labels and taints reset on reboot with the AWS cloudprovider in Kubernetes versions lower than v1.12.0
This document (000020207) is provided subject to the disclaimer at the end of this document.
Situation
Issue
In a Kubernetes cluster running on AWS EC2 instances with the AWS cloudprovider configured, labels and taints applied to a node are reset when the underlying EC2 instance is rebooted.
Pre-requisites
- Kubernetes version lower than v1.12.0
- Cluster running on AWS EC2 instances, with the AWS cloudprovider configured
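Whether a cluster matches these prerequisites can be checked from any workstation with kubectl access. The commands below are a minimal sketch: nodes registered through the AWS cloudprovider normally report an aws:// providerID on the node object.

    # Report the client and server Kubernetes versions
    kubectl version --short

    # Nodes registered by the AWS cloudprovider carry an "aws://..." providerID;
    # an empty PROVIDER column suggests the cloudprovider is not configured
    kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER:.spec.providerID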
Root Cause
This behaviour is caused by the AWS cloudprovider in Kubernetes versions prior to v1.12.0: when an EC2 instance is stopped, the corresponding node object is deleted from the Kubernetes cluster, and it is re-created when the instance is started again. Because labels and taints are stored on the node object, any that were added after the node registered are lost across the reboot. Details of the issue and the fix can be found in Kubernetes Pull Request #66835.
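Until the cluster is upgraded, one workaround is to record the custom labels and taints before a planned reboot and reapply them once the node has re-registered. The sketch below uses an illustrative node name, label and taint; substitute the values used in your cluster.

    # Save the node object, including its labels and taints, before the reboot
    kubectl get node ip-10-0-0-1.ec2.internal -o yaml > node-backup.yaml

    # After the instance has restarted and the node has re-registered,
    # reapply the custom labels and taints recorded in the backup
    kubectl label node ip-10-0-0-1.ec2.internal workload=ingress
    kubectl taint node ip-10-0-0-1.ec2.internal dedicated=ingress:NoSchedule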
Resolution
To resolve this issue, upgrade the cluster to Kubernetes version v1.12.0 or above.
For clusters provisioned via the RKE CLI, users can upgrade the cluster to a Kubernetes version of v1.12.0 or higher with RKE v0.1.10 or above.
For clusters provisioned via Rancher, users can upgrade the cluster to a Kubernetes version of v1.12.6 or higher with Rancher v2.1.7 or above.
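As an illustration of the RKE CLI upgrade path, the Kubernetes version is typically set in cluster.yml and applied with rke up. The version tag below is only an example; use a tag listed as supported by the RKE release in use.

    # cluster.yml (excerpt) - version tag shown for illustration only
    kubernetes_version: "v1.12.6-rancher1-1"

Then apply the change against the existing cluster:

    rke up --config cluster.yml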
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000020207
- Creation Date: 06-May-2021
- Modified Date: 06-May-2021
- SUSE Rancher
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com