Karn Kumar

Reputation: 8816

Error for type and max_open_files in the Logstash server's logs

I'm getting some annoying messages in the Logstash log file on my Logstash server:

The first one:

[2019-01-29T21:27:30,230][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"syslog-2019.01.29", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x7e88287a>], :response=>{"index"=>{"_index"=>"syslog-2019.01.29", "_type"=>"doc", "_id"=>"zsY5nWgB6AmJPdJO_omb", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [syslog-2019.01.29] as the final mapping would have more than 1 type: [messages, doc]"}}}}

The second one, for 'max_open_files':

[2019-01-29T21:24:57,887][WARN ][filewatch.tailmode.processor] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 422

Does this max_open_files relate to the Elasticsearch server that Logstash sends data to?

I have increased the limit in the /usr/lib/systemd/system/elasticsearch.service file and in /etc/security/limits.conf, but nothing changed.

My logstash conf file:

The old one:

[root@myelk04 ~]# cat /etc/logstash/conf.d/syslog.conf
input {
  file {
    path => [ "/data/SYSTEMS/*/messages.log" ]
    start_position => beginning
    sincedb_path => "/dev/null"
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
 }
}
}
output {
        if [type] == "syslog" {
        elasticsearch {
                hosts => "myelk01:9200"
                manage_template => false
                index => "syslog-%{+YYYY.MM.dd}"
                document_type => "messages"
  }
 }
}
[root@myelk04 ~]#

The current one:

I just removed document_type => "messages", since it was causing this message to pop up; the type now defaults to doc.

[root@myelk04 ~]# cat /etc/logstash/conf.d/syslog.conf
input {
  file {
    path => [ "/data/SYSTEMS/*/messages.log" ]
    start_position => beginning
    sincedb_path => "/dev/null"
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
 }
}
}
output {
        if [type] == "syslog" {
        elasticsearch {
                hosts => "myelk01:9200"
                manage_template => false
                index => "syslog-%{+YYYY.MM.dd}"
  }
 }
}
[root@myelk04 ~]#

Upvotes: 0

Views: 1167

Answers (1)

ibexit

Reputation: 3667

The first error says that Logstash is trying to update the mapping for a specific index. This update would add a new mapping for the type "doc", but there is already a mapping for "messages" present. That would result in two mapping types in the same index, which is no longer supported. Please check the mapping for this index and the type of the documents you're trying to index into your syslog-* indices. Maybe you have already used the very same index for documents with the type "messages"?
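To see which mapping types the index already contains, you can query the mapping endpoint; a sketch, using the host and index name from the logs above (adjust them for your cluster):

```shell
# List the mappings of the conflicting daily index; if both "messages" and
# "doc" appear (or "messages" alone), that explains the rejection.
curl -XGET 'http://myelk01:9200/syslog-2019.01.29/_mapping?pretty'
```

If the index already holds documents under the old "messages" type, the simplest way out is usually to let the next day's index be created fresh with the single default type.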

The second error says that the open files limit has been reached. To increase it permanently, you'll need to follow these instructions (which you have already partly applied). Apply these changes not only on your Elasticsearch server but also on the Logstash host, since this particular warning comes from Logstash's file input.
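For a service managed by systemd, limits.conf alone is often not enough, because systemd units do not read PAM limits; a drop-in override for the Logstash unit is one way to raise the limit persistently (the path below is an example):

```
# /etc/systemd/system/logstash.service.d/override.conf
[Service]
LimitNOFILE=65535
```

After creating the drop-in, reload systemd (systemctl daemon-reload) and restart the logstash service so the new limit takes effect.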

In order to apply this setting in the current shell before restarting the service, execute this command (note: ulimit is a shell builtin, so prefixing it with sudo won't work; run it in a root shell instead):

ulimit -n 65535
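Since the warning explicitly mentions the file input's own 'max_open_files' option, you can also raise that option (and optionally have idle files closed sooner) directly in the input block; a sketch, with values chosen only as examples:

```
input {
  file {
    path => [ "/data/SYSTEMS/*/messages.log" ]
    max_open_files => 8192    # raise the input's window above the 4095 default
    close_older => "1 hour"   # release handles for files not modified recently
    start_position => beginning
    sincedb_path => "/dev/null"
    type => "syslog"
  }
}
```

Keep in mind that max_open_files must stay below the process's OS-level open files limit, so the ulimit/systemd changes above are still needed.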

Upvotes: 2
