Reputation: 2489
I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running).
I receive a "Kibana server is not ready yet"
message when I curl http://localhost:5601.
My Elasticsearch instance is on another server and it is responding with success to my requests. I have updated kibana.yml with this:
elasticsearch.hosts: ["http://EXTERNAL-IP-ADDRESS-OF-ES:9200"]
I can reach Elasticsearch from the internet, and it responds with:
{
"name" : "ip-172-31-21-240.ec2.internal",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "y4UjlddiQimGRh29TVZoeA",
"version" : {
"number" : "7.3.1",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "4749ba6",
"build_date" : "2019-08-19T20:19:25.651794Z",
"build_snapshot" : false,
"lucene_version" : "8.1.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
The result of sudo systemctl status kibana:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago
Main PID: 4912 (node)
Tasks: 21 (limit: 4998)
Memory: 368.8M
CGroup: /system.slice/kibana.service
└─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0
The result of sudo journalctl --unit kibana:
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
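The "No living connections" warnings mean Kibana's Elasticsearch client cannot reach any node at the configured URL. As a quick sketch (the URL below is a deliberately dead placeholder; substitute the value from elasticsearch.hosts in kibana.yml), this is how reachability can be tested from the Kibana host itself:

```shell
# Probe an Elasticsearch URL from the Kibana host; prints "reachable"
# or "unreachable". The URL used here is a placeholder port that nothing
# listens on, just to demonstrate the failure path.
check_es() {
  curl -sS --max-time 5 "$1" >/dev/null 2>&1 && echo "reachable" || echo "unreachable"
}
check_es "http://127.0.0.1:1"
```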
Do you have any idea where the problem is?
Upvotes: 47
Views: 217726
Reputation: 1475
In my case there was an explicit error about incompatibility between Elasticsearch and Kibana in /etc/kibana/kibana.log:
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.6.0"},"@timestamp":"2023-04-14T01:21:55.398+02:00","message":"This version of Kibana (v8.7.0) is incompatible with the following Elasticsearch nodes in your cluster: v7.17.9 @ 192.168.0.28:9200 (192.168.0.28)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":118729},"trace":{"id":"66aaf063bef7d7a991c27883f4ad7e4a"},"transaction":{"id":"8f10d4e6d10975d0"}}
https://www.elastic.co/support/matrix#matrix_compatibility
Upvotes: 0
Reputation: 768
Go to the Kibana directory and find the kibana.yml file in the config folder. Change the property to elasticsearch.hosts: ['https://localhost:9200']
. Some IP address is written there, so we are changing it to localhost
.
Upvotes: 3
Reputation: 1831
For me the root cause was insufficient disk space; the Kibana logs contained this error:
Action failed with '[index_not_green_timeout] Timeout waiting for the status of the [.kibana_task_manager_8.5.1_001] index to become 'green' Refer to https://www.elastic.co/guide/en/kibana/8.5/resolve-migrations-failures.html#_repeated_time_out_requests_that_eventually_fail for information on how to resolve the issue.
I went to the link mentioned in the error (https://www.elastic.co/guide/en/kibana/8.5/resolve-migrations-failures.html#_repeated_time_out_requests_that_eventually_fail)
and ran the following request: https://localhost:9200/_cluster/allocation/explain
The response contained this:
"deciders": [
{
"decider": "disk_threshold",
"decision": "NO",
"explanation": "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], having less than the minimum required [21.1gb] free space, actual free: [17.1gb], actual used: [87.8%]"
}
]
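As a rough sketch of the same comparison the disk_threshold decider performs (85% is the default low watermark; the used percentage here mirrors the "actual used: [87.8%]" figure above, but in practice you would take it from df on the Elasticsearch data path):

```shell
# Compare disk usage against the default low watermark (85%).
# used_pct mirrors the allocation-explain output above; in practice,
# replace it with a value parsed from `df` for the data path.
used_pct=87
low_watermark=85
if [ "$used_pct" -ge "$low_watermark" ]; then
  echo "above low watermark: new shards will not be allocated to this node"
else
  echo "below low watermark"
fi
```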
Upvotes: 2
Reputation: 1969
In my case the server was updated and SELinux was blocking the localhost:9200 connection with a connection refused message.
You can check whether it's enabled in /etc/selinux/config.
Upvotes: 0
Reputation: 101
The issue was that Kibana was unable to access Elasticsearch locally. I think you have enabled the xpack.security plugin in elasticsearch.yml by adding a new line:
xpack.security.enabled: true
If so, you need to uncomment these two lines in kibana.yml:
elasticsearch.username: "kibana"
elasticsearch.password: "your-password"
After that, save the changes
and restart the Kibana service: sudo systemctl restart kibana.service
Upvotes: 10
Reputation: 2298
One possible issue is that you are running a Kibana version which is not compatible with Elasticsearch.
Check the bottom of the log file using sudo tail /var/log/kibana/kibana.log
I am using Ubuntu. I can see the message below in the log file:
{"type":"log","@timestamp":"2021-11-02T15:46:07+04:00","tags":["error","savedobjects-service"],"pid":3801445,"message":"This version of Kibana (v7.15.1) is incompatible with the following Elasticsearch nodes in your cluster: v7.9.3 @ localhost/127.0.0.1:9200 (127.0.0.1)"}
Now you need to install the same version of Kibana as Elasticsearch. For example, you can see that on my system Elasticsearch 7.9.3 was installed but Kibana 7.15.1 was installed.
How I resolved this:
sudo apt-get remove kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.3-amd64.deb
shasum -a 512 kibana-7.9.3-amd64.deb
sudo dpkg -i kibana-7.9.3-amd64.deb
sudo service kibana start
curl --request DELETE 'http://localhost:9200/.kibana*'
Modify the /etc/kibana/kibana.yml file and uncomment the lines below:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
Then open the URL below in your browser: http://localhost:5601/app/home
Similarly, you can check your Elasticsearch version and install the same version of Kibana.
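A small sketch of how the version number can be pulled out of such a response (the sample JSON is hard-coded here as an assumption; in practice, pipe `curl -s http://localhost:9200` through the same filter, and compare the result with Kibana's own version):

```shell
# Extract "number" from an Elasticsearch root response.
# The sample response is hard-coded; normally you would use:
#   response=$(curl -s http://localhost:9200)
response='{"name":"node-1","version":{"number":"7.9.3","build_flavor":"default"}}'
es_version=$(printf '%s' "$response" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "Elasticsearch version: $es_version"
```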
Upvotes: 1
Reputation: 3643
I faced the same issue once when I upgraded Elasticsearch from v6 to v7.
Deleting the .kibana*
indices fixed the problem:
curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
Upvotes: 30
Reputation: 309
In my case, the changes below fixed the problem:
In /etc/elasticsearch/elasticsearch.yml, uncomment:
#network.host: localhost
And in /etc/kibana/kibana.yml, uncomment:
#elasticsearch.hosts: ["http://localhost:9200"]
Upvotes: 5
Reputation: 3350
There can be multiple reasons for this. A few things to try:
Delete the .kibana* indices, as Karthik pointed out above.
If that doesn't work, turn on verbose logging in kibana.yml
and restart Kibana to get more insight into what may be the cause.
Upvotes: 3
Reputation: 4211
Execute this:
curl -XDELETE http://localhost:9200/*kibana*
and restart the Kibana service:
service kibana restart
Upvotes: 4
Reputation: 1306
My scenario ended up with the same issue but resulted from using the official Docker containers for both Elasticsearch and Kibana. In particular, the documentation on the Kibana image incorrectly assumes you will have at least one piece of critical knowledge.
In my case, the solution was to be sure to link the Elasticsearch container using the :elasticsearch alias, not the version tag. I had made the mistake of using the Elasticsearch container version tag. Here is the corrected format of the docker run command I needed:
docker run -d --name {Kibana container name to set} --net {network name known to Elasticsearch container} --link {name of Elasticsearch container}:elasticsearch -p 5601:5601 kibana:7.10.1
Considering the command above, if we substitute lookyHere as the Kibana container name, myNet as the network name, and myPersistence as the Elasticsearch container name, then we get the following:
docker run -d --name lookyHere --net myNet --link myPersistence:elasticsearch -p 5601:5601 kibana:7.10.1
That :elasticsearch
right there is critical to getting this working, as it sets the elasticsearch.hosts
value in the /etc/kibana/kibana.yml
file... which you will not be able to easily modify if you are using the official Docker images. @user8832381's answer above gave me the direction I needed towards figuring this out.
Hopefully, this will save someone a few hours.
Upvotes: 0
Reputation: 12406
The reason may be insufficient virtual memory (for Linux Docker hosts only). By default,
virtual memory is not enough, so run the following command as root:
sysctl -w vm.max_map_count=262144
To keep the setting even after VM reloads, please check this comment: https://stackoverflow.com/a/50371108/1151741
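A small sketch for checking the current value before changing it (it reads /proc/sys directly, so it works without root; 262144 is the minimum Elasticsearch requires):

```shell
# Read the current vm.max_map_count and warn if it is below the
# minimum Elasticsearch requires (262144).
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -lt 262144 ]; then
  echo "too low ($current): run 'sysctl -w vm.max_map_count=262144' as root"
else
  echo "ok ($current)"
fi
```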
Upvotes: 1
Reputation: 572
Refer to the discussion on Kibana unable to connect to elasticsearch on windows
Deleting the .kibana_task_manager_1 index on Elasticsearch solved the issue for me!
Upvotes: 3
Reputation: 621
Probably not the solution for this question, but in my case the versions of Kibana and Elasticsearch were not compatible.
Since I was using Docker, I just recreated both containers using the same version (7.5.1).
https://www.elastic.co/support/matrix#matrix_compatibility
Upvotes: 11
Reputation: 231
The error might be related to the elasticsearch.hosts
setting. The following steps worked for me:
1. Open the /etc/elasticsearch/elasticsearch.yml
file and uncomment:
#network.host: localhost
2. Open the /etc/kibana/kibana.yml
file and uncomment:
#elasticsearch.hosts: ["http://localhost:9200"]
The issue was that Kibana was unable to access Elasticsearch locally.
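For reference, a minimal sketch of the two files after uncommenting (this assumes Elasticsearch runs on the same host as Kibana; adjust the host and port if it is remote):

```yaml
# /etc/elasticsearch/elasticsearch.yml
network.host: localhost

# /etc/kibana/kibana.yml
elasticsearch.hosts: ["http://localhost:9200"]
```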
Upvotes: 23
Reputation: 2489
To overcome this incident, I deleted and recreated both servers. I installed ES and Kibana 7.4, and I also increased the VM size of the ES server from t1.micro to t2.small. All worked well. On the previous ES instance, the instance was sometimes stopping itself; the VM RAM was 1 GB, so I had to limit the JVM heap size, and maybe that's the reason the whole problem occurred.
Upvotes: 0