Reputation: 1572
I am using logstash with elasticsearch and kibana for mining my logs. It was working fine until yesterday, but it suddenly started giving the following error, which I am not able to understand -
No results There were no results because no indices were found that match your selected time span
The logstash logs contain the following info:
{:timestamp=>"2013-12-19T17:32:47.612000+0530", :message=>"Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more inform$
{:timestamp=>"2013-12-19T17:32:47.728000+0530", :message=>"You are using a deprecated config setting \"type\" set in multiline. Deprecated settings will continue to work, but are scheduled for rem$
{:timestamp=>"2013-12-19T17:32:47.781000+0530", :message=>"You are using a deprecated config setting \"type\" set in grok. Deprecated settings will continue to work, but are scheduled for removal $
{:timestamp=>"2013-12-19T17:32:47.839000+0530", :message=>"You are using a deprecated config setting \"type\" set in date. Deprecated settings will continue to work, but are scheduled for removal $
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (EADDRINUSE) Address already in use - bind - Address already in use
at org.jruby.ext.socket.RubyTCPServer.initialize(org/jruby/ext/socket/RubyTCPServer.java:118)
at org.jruby.RubyIO.new(org/jruby/RubyIO.java:852)
at RUBY.initialize(jar:file:/u001/logparser/tools/logstash/logstash-1.3.1-flatjar.jar!/ftw/server.rb:50)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/u001/logparser/tools/logstash/logstash-1.3.1-flatjar.jar!/ftw/server.rb:46)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
at RUBY.initialize(jar:file:/u001/logparser/tools/logstash/logstash-1.3.1-flatjar.jar!/ftw/server.rb:34)
at RUBY.run(file:/u001/logparser/tools/logstash/logstash-1.3.1-flatjar.jar!/rack/handler/ftw.rb:94)
at RUBY.run(file:/u001/logparser/tools/logstash/logstash-1.3.1-flatjar.jar!/logstash/kibana.rb:101)
Upvotes: 2
Views: 10031
Reputation: 733
Coming to the logstash error: EADDRINUSE means the port logstash is trying to bind is already taken. This usually happens when an earlier logstash instance (or some other application) is still running and holding the embedded Kibana web port, so the new instance cannot bind it, fails with the bind exception, and stops.
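A quick way to check this is to look for whatever is already bound to the web port before restarting logstash. This is a sketch assuming the default embedded Kibana port 9292 of logstash 1.3; adjust PORT if you start the web interface with a different port:

```shell
# Port logstash 1.3's embedded Kibana web server binds by default (assumption;
# change this if you override it on the command line).
PORT=9292
# On Linux, netstat's last column shows the owning pid/program
# (run as root to see processes owned by other users).
netstat -tlnp 2>/dev/null | grep ":$PORT " || echo "port $PORT is free"
# If a stale logstash java process shows up here, stop it before restarting:
#   kill <pid>
```

If the old process is yours, killing it and restarting logstash should clear the bind exception.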
Coming to the kibana error: check the index names, mappings, and the index pattern in elasticsearch. The pattern kibana queries must match the indices logstash is actually writing to; if no matching index exists for your selected time span, kibana shows exactly that "no indices were found" message.
If you have installed the head plugin in elasticsearch, your indices can be inspected in a browser. Start the elasticsearch service in the foreground:

bin/elasticsearch -f

then open from your browser:

http://elasticsearchinstalledip:9201/_plugin/head
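You can also list the index names directly over elasticsearch's HTTP API. A sketch, where `elasticsearchinstalledip` is the placeholder host from the URL above and 9201 is the port shown there (the stock HTTP port is 9200; elasticsearch only moves to 9201 when 9200 is already taken, which fits the bind problems described above):

```shell
# Placeholder host from the head-plugin URL above; replace with your machine.
ES_HOST=elasticsearchinstalledip
ES_PORT=9201   # elasticsearch's HTTP default is 9200; 9201 means 9200 was busy
URL="http://$ES_HOST:$ES_PORT/_aliases?pretty"
# Lists every index; kibana's default pattern expects names like logstash-YYYY.MM.dd
curl -sS "$URL" || echo "could not reach $ES_HOST:$ES_PORT"
```

If no `logstash-...` index shows up for the dates in your selected time span, that directly explains kibana's "no indices were found" message.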
Upvotes: 1