How to increase the log level for Canal components in a Rancher Kubernetes Engine (RKE) or Rancher v2.x provisioned Kubernetes cluster

This document (000020075) is provided subject to the disclaimer at the end of this document.

Situation

Task

During network troubleshooting it may be useful to increase the log level of the Canal components. This article details how to enable verbose, debug-level logging for the Canal components in Rancher Kubernetes Engine (RKE) CLI or Rancher v2.x provisioned Kubernetes clusters.

Pre-requisites

  • A Rancher Kubernetes Engine (RKE) CLI or Rancher v2.x provisioned Kubernetes cluster with the Canal Network Provider
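To confirm that a cluster is using the Canal network provider, the presence of the canal DaemonSet in the kube-system namespace can be checked, for example (with a Kube Config file sourced for the cluster):

      kubectl -n kube-system get daemonset canal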

Resolution

N.B. As these instructions involve editing the Canal DaemonSet directly, the change will not persist across cluster update events, i.e. invocations of rke up for RKE CLI provisioned clusters, or changes to the cluster configuration for a Rancher provisioned cluster. As a result, cluster updates should be avoided whilst collecting the debug-level logs for troubleshooting.
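If a cluster update does occur, it is possible to check whether the debug setting is still in place before re-applying it. As a minimal sketch, the following command prints the current value of the CALICO_STARTUP_LOGLEVEL variable on the calico-node container; empty output indicates the variable has been removed by the update:

      kubectl -n kube-system get daemonset canal \
        -o jsonpath='{.spec.template.spec.containers[?(@.name=="calico-node")].env[?(@.name=="CALICO_STARTUP_LOGLEVEL")].value}'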

Via the Rancher UI

For a Rancher v2.x managed cluster, the Canal component log level can be adjusted via the Rancher UI, per the following process:

  1. Navigate to the System project of the relevant cluster within the Rancher UI.
  2. Locate the canal DaemonSet workload within the kube-system namespace, click the vertical ellipsis (⋮) menu and select Edit.
  3. Click to Edit the calico-node container.
  4. Add CALICO_STARTUP_LOGLEVEL = DEBUG in the Environment Variables section, then click Save. The resulting calico-node container definition should resemble the following:
          containers:
            - env:
                - name: DATASTORE_TYPE
                  value: kubernetes
                - name: USE_POD_CIDR
                  value: 'true'
                - name: WAIT_FOR_DATASTORE
                  value: 'true'
                - name: NODENAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
                - name: CALICO_NETWORKING_BACKEND
                  value: none
                - name: CLUSTER_TYPE
                  value: k8s,canal
                - name: FELIX_IPTABLESREFRESHINTERVAL
                  value: '60'
                - name: IP
                - name: CALICO_DISABLE_FILE_LOGGING
                  value: 'true'
                - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
                  value: ACCEPT
                - name: FELIX_IPV6SUPPORT
                  value: 'false'
                - name: FELIX_LOGFILEPATH
                  value: none
                - name: FELIX_LOGSEVERITYSYS
                - name: FELIX_LOGSEVERITYSCREEN
                  value: Warning
                - name: FELIX_HEALTHENABLED
                  value: 'true'
                - name: FELIX_IPTABLESBACKEND
                  value: auto
                - name: CALICO_STARTUP_LOGLEVEL
                  value: DEBUG
              envFrom:
                - configMapRef:
                    name: kubernetes-services-endpoint
                    optional: true
              image: rancher/mirrored-calico-node:v3.22.0
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /bin/calico-node
                      - '-shutdown'
              livenessProbe:
                exec:
                  command:
                    - /bin/calico-node
                    - '-felix-live'
                failureThreshold: 6
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 10
              name: calico-node
  5. Click Edit Yaml for the canal DaemonSet again.
  6. This time click Edit on the kube-flannel container.
  7. In the Command section, add --v=10 to the Entrypoint, e.g. /opt/bin/flanneld --ip-masq --kube-subnet-mgr --v=10, and click Save.
            - command:
                - /opt/bin/flanneld
                - '--ip-masq'
                - '--kube-subnet-mgr'
                - '--v=10'
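After saving, the canal pods are recreated with the updated configuration. If desired, the rollout can be watched via kubectl, for example:

      kubectl -n kube-system rollout status daemonset/canal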
    

     
Via kubectl

With a Kube Config file sourced for the relevant cluster, for a user with permission to edit the System project, the Canal component log level can be adjusted via kubectl, per the following process:

  1. Run kubectl -n kube-system edit daemonset canal.
  2. In the env definition for the calico-node container add an environment variable with the name CALICO_STARTUP_LOGLEVEL and value DEBUG, e.g.:
    [...]
          containers:
          - env:
            [...]
            - name: CALICO_STARTUP_LOGLEVEL
              value: DEBUG
    [...]
  3. In the command definition for the kube-flannel container add --v=10 to the command, e.g.:
    [...]
          - command:
            - /opt/bin/flanneld
            - --ip-masq
            - --kube-subnet-mgr
            - --v=10
    [...]
  4. Save the file.
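Alternatively, the calico-node environment variable can be set non-interactively. The following one-liner is a sketch using the kubectl set env subcommand, assuming the default canal DaemonSet and calico-node container names; note that the kube-flannel --v=10 argument still needs to be added by editing the container command as in step 3:

      kubectl -n kube-system set env daemonset/canal -c calico-node CALICO_STARTUP_LOGLEVEL=DEBUG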
After setting the debug log level, the change can be verified by viewing the canal pod logs.
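For example, the calico-node container logs can be retrieved with the following command (assuming the canal pods carry the default k8s-app=canal label):

      kubectl -n kube-system logs -l k8s-app=canal -c calico-node

Sample debug-level startup output: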
2024-07-10 07:11:15.447 [INFO][9] startup/startup.go 425: Early log level set to debug
2024-07-10 07:11:15.447 [DEBUG][9] startup/load.go 124: No kubeconfig file at default path, leaving blank.
2024-07-10 07:11:15.447 [DEBUG][9] startup/client.go 30: Using datastore type 'kubernetes'
2024-07-10 07:11:15.450 [DEBUG][9] startup/k8s.go 628: Performing 'Get' for Node(foo) 
2024-07-10 07:11:15.450 [DEBUG][9] startup/node.go 118: Received Get request on Node type
2024-07-10 07:11:15.483 [DEBUG][9] startup/k8s.go 98: Created k8s clientSet: &{DiscoveryClient:0xc000c38ea0 admissionregistrationV1:0xc000609ad0 admissionregistrationV1beta1:0xc000609b30 internalV1alpha1:0xc000609b90 appsV1:0xc000609bf0 appsV1beta1:0xc000609c50 appsV1beta2:0xc000609cb0 authenticationV1:0xc000609d10 authenticationV1beta1:0xc000609d70 authorizationV1:0xc000609dd0 authorizationV1beta1:0xc000609e30 autoscalingV1:0xc000609f40 autoscalingV2beta1:0xc000609fc0 autoscalingV2beta2:0xc000410150 batchV1:0xc000410380 batchV1beta1:0xc0004105c0 certificatesV1:0xc0004107e0 certificatesV1beta1:0xc000410940 coordinationV1beta1:0xc000410a40 coordinationV1:0xc000410aa0 coreV1:0xc000410b00 discoveryV1:0xc000410b90 discoveryV1beta1:0xc000410bf0 eventsV1:0xc000410c50 eventsV1beta1:0xc000410cb0 extensionsV1beta1:0xc000410d30 flowcontrolV1alpha1:0xc000410d90 flowcontrolV1beta1:0xc000410e10 networkingV1:0xc000410ed0 networkingV1beta1:0xc000410f30 nodeV1:0xc000410fa0 nodeV1alpha1:0xc000411000 nodeV1beta1:0xc000411060 policyV1:0xc0004110e0 policyV1beta1:0xc000411140 rbacV1:0xc0004111c0 rbacV1beta1:0xc000411220 rbacV1alpha1:0xc000411290 schedulingV1alpha1:0xc0004112f0 schedulingV1beta1:0xc000411360 schedulingV1:0xc0004113c0 storageV1beta1:0xc000411420 storageV1:0xc000411480 storageV1alpha1:0xc0004114f0}
2024-07-10 07:11:15.483 [DEBUG][9] startup/k8s.go 628: Performing 'Get' for ClusterInformation(default) 
2024-07-10 07:11:15.483 [DEBUG][9] startup/customresource.go 205: Get custom Kubernetes resource Key=ClusterInformation(default) Resource="ClusterInformations" Revision=""
2024-07-10 07:11:15.483 [DEBUG][9] startup/customresource.go 216: Get custom Kubernetes resource by name Key=ClusterInformation(default) Name="default" Namespace="" Resource="ClusterInformations" Revision=""
2024-07-10 07:11:15.487 [DEBUG][9] startup/migrate.go 820: major version is already >= 3: v3.22.0
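Once troubleshooting is complete, the default log levels should be restored by reverting the changes above. As a sketch, the environment variable can be removed with kubectl set env (the trailing dash removes the variable); the --v=10 argument should likewise be removed from the kube-flannel container command by editing the DaemonSet again:

      kubectl -n kube-system set env daemonset/canal -c calico-node CALICO_STARTUP_LOGLEVEL-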


Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000020075
  • Creation Date: 06-May-2021
  • Modified Date: 12-Jul-2024
    • SUSE Rancher
