
rancher-logging receiving exception Kafka::MessageSizeTooLarge

This document (000021530) is provided subject to the disclaimer at the end of this document.

Environment

Rancher 2.5.x or higher with rancher-logging v2
 

Situation

After configuring a rancher-logging Output to send logs to Kafka, the error message Kafka::MessageSizeTooLarge appears in the fluentd logs:

2024-08-02 10:14:49 +0000 [warn]: #0 [flow:iiab1:test3-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] Send exception occurred: Kafka::MessageSizeTooLarge

2024-08-02 10:14:49 +0000 [warn]: #0 [flow:iiab1:test3-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] Exception Backtrace : /usr/lib/ruby/gems/2.7.0/gems/ruby-kafka-1.5.0/lib/kafka/protocol.rb:160:in `handle_error'

2024-08-02 10:14:49 +0000 [info]: #0 [flow:iiab1:test3-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] initialized kafka producer: fluentd

2024-08-02 10:14:49 +0000 [warn]: #0 [flow:iiab1:test3-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] failed to flush the buffer. retry_times=18 next_retry_time=2024-08-05 06:42:49 +0000 chunk="61e2ecae3d68e0ed7479b6c2432aee7c" error_class=Kafka::MessageSizeTooLarge error="Kafka::MessageSizeTooLarge"

2024-08-02 13:33:37 +0000 [warn]: #0 [flow:iiab1:test2-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] Send exception occurred: Kafka::MessageSizeTooLarge

2024-08-02 13:33:37 +0000 [warn]: #0 [flow:iiab1:test2-iiab1:clusteroutput:cattle-logging-system:iiabe1-kafka-co] Exception Backtrace : /usr/lib/ruby/gems/2.7.0/gems/ruby-kafka-1.5.0/lib/kafka/protocol.rb:160:in `handle_error'

 

Resolution

There are two ways to resolve this error:
  • Kafka: the preferred option is to increase message.max.bytes (the broker-wide message size limit) to a value higher than 8 MB, the default fluentd chunk size.
  • rancher-logging: if the Kafka configuration cannot be changed, lower chunk_limit_size to 1 MB or less in the output buffer configuration. Below is an example of a Kafka Output where chunk_limit_size has been lowered to 1MB:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output-example
spec:
  kafka:
    brokers: kafka-headless.kafka.svc.cluster.local:29092
    default_topic: topic
    sasl_over_ssl: false
    format:
      type: json
    buffer:
      chunk_limit_size: 1MB
      tags: topic
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
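For the first option, the limit can be raised on the Kafka side. The following is a sketch using Kafka's standard kafka-configs.sh tool; the topic name "topic" and the bootstrap server address are placeholders to adapt to your environment. Note that the per-topic setting is named max.message.bytes, while message.max.bytes is the broker-wide default:

```sh
# Raise the per-topic message size limit to 10 MB, which is above the
# 8 MB chunks fluentd sends by default.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name topic \
  --alter --add-config max.message.bytes=10485760

# Verify the new value:
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name topic --describe
```

If consumers fetch from this topic, their fetch.max.bytes and max.partition.fetch.bytes settings may also need to be raised accordingly.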

Cause

This exception occurs because fluentd in rancher-logging is configured to send chunks of up to 8 MB by default, while Kafka's default message size limit (message.max.bytes) is about 1 MB.

Additional Information

https://www.conduktor.io/kafka/how-to-send-large-messages-in-apache-kafka/

https://kube-logging.dev/4.0/docs/configuration/plugins/outputs/buffer/

https://kube-logging.dev/4.0/docs/configuration/plugins/outputs/kafka/

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000021530
  • Creation Date: 18-Aug-2024
  • Modified Date: 23-Aug-2024
  • Product: SUSE Rancher
