Naveen R

Reputation: 9

failed to write data into buffer by buffer overflow action=:throw_exception

Environment

Fluentd 1.16.5 on Ruby 3.2.0 (see the error log below), tailing Kubernetes container logs and shipping them to an OpenSearch domain over HTTPS.

Configuration

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key @timestamp
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

<match kubernetes.**>
  @type opensearch
  include_tag_key true
  host "opensearch domain"
  port "443"
  scheme https
  ssl_verify true
  ssl_version TLSv1_2
  index_name services_log
  include_timestamp true
  tag_key @log_name
  time_key @timestamp
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  buffer_chunk_limit 2M
  buffer_queue_limit 32
  flush_interval 5s
  max_retry_wait 30
  disable_retry_limit
  num_threads 8
</match>

Error Log

2024-05-24 04:41:09 +0000 [warn]: #0 failed to write data into buffer by buffer overflow action=:throw_exception
2024-05-24 04:41:09 +0000 [warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/fluentd/vendor/bundle/ruby/3.2.0/gems/fluentd-1.16.5/lib/fluent/plugin/buffer.rb:330:in `write'" tag="kubernetes.var.log.containers.service.log"

Upvotes: 0

Views: 243

Answers (1)

Mateo Gross

Reputation: 23

I encountered similar issues with an on-premises Elasticsearch cluster and resolved them by adjusting the following parameters, which cap the chunk size and queue length and flush often enough that the buffer drains faster than it fills:

buffer_chunk_limit 2M
buffer_queue_limit 32
flush_interval 5s
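
Note that these are the legacy (v0.12-style) top-level parameter names; on Fluentd 1.x, such as the 1.16.5 in the question, the equivalent settings live in a <buffer> section inside the match, which also lets you raise the overall buffer cap and choose an overflow_action other than the default throw_exception. A minimal sketch, with illustrative values:

<match kubernetes.**>
  @type opensearch
  # host, port, TLS and index settings as in the question
  <buffer>
    # chunk_limit_size replaces buffer_chunk_limit
    chunk_limit_size 2M
    # overall buffer cap; raise this if overflow persists (illustrative value)
    total_limit_size 512M
    flush_interval 5s
    # flush_thread_count replaces num_threads
    flush_thread_count 8
    # block applies back-pressure to the input instead of raising
    # BufferOverflowError; drop_oldest_chunk is the lossy alternative
    overflow_action block
    # retry_forever replaces disable_retry_limit,
    # retry_max_interval replaces max_retry_wait
    retry_forever true
    retry_max_interval 30
  </buffer>
</match>

With overflow_action block, in_tail pauses reading while the buffer is full rather than throwing the error shown in the question.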

Additionally, I kept Fluentd's own logs out of the tail loop (a buffer warning otherwise produces new log lines, which are tailed and buffered in turn) by adding this line to my source directive:

exclude_path /var/log/containers/fluentd*.log
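
For context, the source directive then looks like this (paths copied from the question; the fluentd*.log glob assumes Fluentd's own container log lands in the same directory):

<source>
  @type tail
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd*.log"]
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  # <parse> block unchanged from the question
</source>

exclude_path is an array-typed parameter, so the bracketed form above and the bare single-path form are equivalent.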

Upvotes: 0
