Reputation: 1325
I'm trying to connect Elasticsearch to Logstash on a centralized Logstash aggregator.
I'm running the Logstash web interface (Kibana) over port 80.
This is the command I'm using to start Logstash:
/usr/bin/java -jar /etc/alternatives/logstashhome/logstash.jar agent -f /etc/logstash/logstash.conf web --port 80
This is the conf I am using:
[root@logstash:~] #cat /etc/logstash/logstash.conf
input {
  redis {
    host      => "my-ip-here"
    type      => "redis-input"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    type      => "all"
    embedded  => false
    host      => "my-ip-here"
    port      => "9300"
    cluster   => "jf"
    node_name => "logstash"
  }
}
It looks as if I am receiving data from the Logstash agent (installed on another host); I see log entries streaming by after I start Logstash via the init script.
2013-10-31T02:51:53.916+0000 beta Oct 30 22:51:53 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 sshd[23324]: Connection closed by xx.xx.xx.xx
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session opened.
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session closed.
2013-10-31T02:52:30.080+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: START: nrpe pid=23405 from=xx.xx.xx.xx
2013-10-31T02:52:30.081+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: EXIT: nrpe status=0 pid=23405 duration=0(sec)
I can see my Nagios server connecting to the beta host (beta is the external host with the Logstash agent installed and running) and some FTP sessions (not that I'm in love with FTP, but hey, what can ya do?).
Yet when I point my browser at the Logstash server, I see this message:
Error: No index found at http://logstash.mydomain.com:9200/_all/_mapping. Please create at least one index. If you're using a proxy, ensure it is configured correctly. 1 alert(s)
This is my cluster setting in elasticsearch.yml:
grep -i cluster /etc/elasticsearch/elasticsearch.yml | grep jf
cluster.name: jf
And my host setting: grep -i host /etc/elasticsearch/elasticsearch.yml
network.bind_host: xxx.xx.xx.xxx # <- my logstash ip
I did try to create an index with the following curl command:
[root@logstash:~] #curl -PUT http://logstash.mydomain.com:9200/_template/logstash_per_index
But when I reload the page, I get the same error message. I'm a bit stuck at this point, so I'd appreciate any advice anyone may have!
Thanks!
Upvotes: 1
Views: 11322
Reputation: 391
I was also having a similar issue: Elasticsearch was running on a different port while Kibana was trying to reach it on port 9200, which is set in the ./vendor/kibana/config.js file inside the Logstash home folder.
Upvotes: 0
Reputation: 896
You should fall back to the default configuration by removing this line:
port => "9300"
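With that line removed, the output block from the question would look something like this (a sketch based on the asker's own config; "my-ip-here" and the cluster name are copied from the question), letting Logstash use its default transport port:

```
elasticsearch {
  type      => "all"
  embedded  => false
  host      => "my-ip-here"
  cluster   => "jf"
  node_name => "logstash"
}
```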
Upvotes: 0
Reputation: 235
What is the output of Logstash (i.e. its log file)?
The version of the Logstash-embedded Elasticsearch must match your standalone version, e.g. Logstash 1.3 uses Elasticsearch 0.90, and Logstash 1.4 uses Elasticsearch 1.0.
So either take care to use the matching Elasticsearch version, or use elasticsearch_http as the output (with port 9200) to go through the REST API instead.
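For example, the output section from the question could be switched to the REST-based output along these lines (a sketch only; the host value and the choice to keep stdout are carried over from the asker's config, and the rest of the settings are assumptions):

```
output {
  stdout { }
  elasticsearch_http {
    host => "my-ip-here"   # the standalone Elasticsearch host from the question
    port => 9200           # REST API port, so no transport-version matching needed
  }
}
```

Because elasticsearch_http talks HTTP to port 9200, the embedded/standalone version mismatch on the transport protocol (port 9300) no longer matters.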
Upvotes: 0
Reputation: 11
You can check out this Chrome plugin:
https://chrome.google.com/webstore/detail/sense/doinijnbnggojdlcjifpdckfokbbfpbo?hl=en
It's a JSON-aware developer tool for Elasticsearch. Also, after creating the index, clear the browser cache, close the browser, and retest.
Upvotes: 0
Reputation: 11
Try executing this:
curl -XPUT http://127.0.0.1:9200/test/item/1 -d '{"name":"addhe warman", "description": "White hat hacker."}'
Your Elasticsearch may simply be empty; try filling it with sample data and then find out what the real problem is. Good luck.
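If that PUT succeeds, you can confirm the document landed and that at least one index now exists (which is what Kibana's _all/_mapping check is complaining about). A sketch, assuming Elasticsearch is reachable on 127.0.0.1:9200 as in the command above:

```
curl -XGET http://127.0.0.1:9200/test/item/1
curl -XGET http://127.0.0.1:9200/_all/_mapping
```

The first call should return the stored JSON document; the second should now list the test index instead of coming back empty.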
Upvotes: 1