krisdigitx

Reputation: 7136

logstash org.elasticsearch.discovery.MasterNotDiscoveredException error

I have installed logstash 1.1.13 with elasticsearch-0.20.6, using the config below for logstash.conf:

input {
  tcp {
    port => 524
    type => rsyslog
  }
  udp {
    port => 524
    type => rsyslog
  }
}

filter {
  grok {
    type => "rsyslog"
    pattern => [ "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{PROG:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{@source_host}" ]
  }
  syslog_pri {
    type => "rsyslog"
  }
  date {
    type => "rsyslog"
    syslog_timestamp => [ "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
  mutate {
    type => "rsyslog"
    exclude_tags => "_grokparsefailure"
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{syslog_message}" ]
  }
  mutate {
    type => "rsyslog"
    remove => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
  }
}

output {
  elasticsearch {
    host => "127.0.0.1"
    port => 9300
    node_name => "sysloG33r-1"
    bind_host => "localhost"
  }
}

and this elasticsearch.yml:

cluster:
    name: syslogcluster
node:
    name: "sysloG33r-1"
path:
    data: /var/lib/elasticsearch
    logs: /var/log/elasticsearch
network:
    host: "0.0.0.0"

and started logstash with the command:

[root@clane elasticsearch]# java -jar /usr/local/bin/logstash/bin/logstash.jar agent -f /etc/logstash/logstash.conf
Using experimental plugin 'syslog_pri'. This plugin is untested and may change in the future. For more information about plugin statuses, see http://logstash.net/docs/1.1.13/plugin-status  {:level=>:warn}
date: You used a deprecated setting 'syslog_timestamp => ["MMM d HH:mm:ss", "MMM dd HH:mm:ss"]'. You should use 'match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]' {:level=>:warn}
PORT SETTINGS 127.0.0.1:9300
log4j, [2013-06-21T14:40:08.013]  WARN: org.elasticsearch.discovery: [sysloG33r-1] waited for 30s and no initial state was set by the discovery
Failed to index an event, will retry {:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [1m], :event=>{"@source"=>"tcp://10.66.59.35:34662/", "@tags"=>[], "@fields"=>{"syslog_pri"=>["78"], "syslog_program"=>["crond"], "syslog_pid"=>["6511"], "received_at"=>["2013-06-21T13:40:01.845Z"], "received_from"=>["10.66.59.35"], "syslog_severity_code"=>6, "syslog_facility_code"=>9, "syslog_facility"=>"clock", "syslog_severity"=>"informational"}, "@timestamp"=>"2013-06-21T12:40:01.000Z", "@source_host"=>"kent", "@source_path"=>"/", "@message"=>"(root) CMD (/opt/bin/firewall-state.sh)", "@type"=>"rsyslog"}, :level=>:warn}
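(Side note: if I follow the deprecation warning above, the date filter would presumably be written with the newer match syntax, something like this:)

date {
  type => "rsyslog"
  # newer form suggested by the warning, replacing the deprecated syslog_timestamp setting
  match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}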

and started elasticsearch with:

/usr/local/bin/elasticsearch start

I can see all the correct Java ports listening for elasticsearch (9200, 9300) and logstash (524):

tcp        0      0 :::524                      :::*                        LISTEN      12557/java          
tcp        0      0 :::9200                     :::*                        LISTEN      10782/java          
tcp        0      0 :::9300                     :::*                        LISTEN      10782/java          
tcp        0      0 ::ffff:127.0.0.1:9301       :::*                        LISTEN      12557/java          
udp        0      0 :::524                      :::*                                    12557/java          
udp        0      0 :::54328                    :::*                                    10782/java 

However, I still see this error from logstash. Any ideas?

Failed to index an event, will retry {:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [1m], :event=>{"@source"=>"tcp://10.66.59.35:33598/", "@tags"=>[], "@fields"=>{"syslog_pri"=>["78"], "syslog_program"=>["crond"], "syslog_pid"=>["12983"], "received_at"=>["2013-06-21T12:07:01.541Z"], "received_from"=>["10.66.59.35"], "syslog_severity_code"=>6, "syslog_facility_code"=>9, "syslog_facility"=>"clock", "syslog_severity"=>"informational"}, "@timestamp"=>"2013-06-21T11:07:01.000Z", "@source_host"=>"kent", "@source_path"=>"/", "@message"=>"(root) CMD (/opt/bin/firewall-state.sh)", "@type"=>"rsyslog"}, :level=>:warn}

Upvotes: 3

Views: 23308

Answers (3)

Ysak

Reputation: 2765

I came across the same kind of issue and fixed it by adding the cluster option to the elasticsearch output in the logstash config. Since you have changed the cluster name in elasticsearch.yml, the logstash client will not be able to find the cluster using the default value.

Try doing this as well.
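Something like this, assuming the cluster name from the elasticsearch.yml above (the cluster option has been in the elasticsearch output for a while, but check the docs for your logstash version):

output {
  elasticsearch {
    # must match cluster.name ("syslogcluster") in elasticsearch.yml
    cluster => "syslogcluster"
    host => "127.0.0.1"
    port => 9300
  }
}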

Upvotes: 0

Hadrien

Reputation: 59

I had a similar issue, and it came from my IP configuration. In a nutshell, check that you have only one IP address on the logstash host. If not, it can choose the wrong one.
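If the host is multi-homed, one possible workaround (a sketch only; 10.66.59.35 stands in for whatever address the cluster can actually reach) is to pin the bind address in the elasticsearch output instead of leaving it on localhost:

output {
  elasticsearch {
    host => "127.0.0.1"
    port => 9300
    # bind the client to the reachable interface (assumed address)
    bind_host => "10.66.59.35"
  }
}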

Posted the same answer here: Logstash with Elasticsearch

Upvotes: 1

jgoldschrafe

Reputation: 264

I'm going to assume you've checked the obvious things, like "is ElasticSearch running?" and "can I open a TCP connection to port 9300 on localhost?"

Even though you're using a host parameter in your elasticsearch output, what's probably happening is that the ElasticSearch client in Logstash is trying to discover cluster members by multicast (which is how a new install is typically configured by default), and is failing. This is common on EC2, as well as many other environments where firewall configurations may interfere with multicast discovery. If this is the only member in your cluster, setting the following in your elasticsearch.yml should do the trick:

discovery:
  zen:
    ping:
      multicast:
        enabled: false
      unicast:
        hosts: <your_ip>[9300-9400]

On AWS, there's also an EC2 discovery plugin that will clear this right up for you.
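A rough sketch of that setup in elasticsearch.yml, assuming the elasticsearch-cloud-aws plugin is installed (exact setting names may vary by plugin version, so check its docs):

discovery:
    type: ec2
cloud:
    aws:
        access_key: <your_access_key>
        secret_key: <your_secret_key>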

This question really belongs on Server Fault rather than Stack Overflow, by the way.

Upvotes: 8
