Reputation: 526
Goal: Load a Kibana saved object automatically when starting the Elastic Stack from Docker-Compose.
I have an "export.json" object from Kibana, containing my default index, 10+ visualizations, and dashboard setup. I am able to successfully start the Elastic Stack from Docker-Compose and manually load the object, but it does not load it automatically upon container start.
I need to find a method to load this object programmatically, as I will have multiple objects that can be deployed depending on the containers running. In other words (using pseudo-config):
test1
    container a        # code to run
    container b        # code to send metrics
    test1_export.json  # display metrics
test2
    container c        # code to run
    container d        # code to send metrics
    test2_export.json  # display metrics
Using these definitions:
> run test1
would execute two Docker-Compose files: one to run the code and the other to run the Elastic Stack, which would also need to load the test1_export.json object.
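For illustration, a wrapper script along these lines is what I have in mind (the file names here are hypothetical):

#!/bin/bash
# run.sh -- start the Elastic Stack plus the containers for one test.
# Usage: ./run.sh test1
set -euo pipefail

test_name=$1

# Start the shared Elastic Stack (Elasticsearch, Logstash, Kibana).
docker-compose -f docker-compose.elastic.yml up -d

# Start the containers belonging to the requested test.
docker-compose -f "docker-compose.${test_name}.yml" up -d

# The missing piece: something here should load ${test_name}_export.json
# into Kibana so the dashboards are ready when Kibana comes up.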
This method was alluded to in this config file:
Kibana uses an index in Elasticsearch to store saved searches, visualizations and dashboards. Kibana creates a new index if the index doesn't already exist.
Also from here:
Kibana keeps all its state in Elasticsearch, specifically in the .kibana index. There should be no need to preserve any state on the local file system of the Kibana containers.
However, Kibana starts from scratch whenever I restart my containers.
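For reference, the saved objects can be inspected with standard Elasticsearch calls (assuming the port mapping in the compose file below) to confirm whether the .kibana index actually survives a restart:

# List all indices; .kibana should still appear after a restart.
curl -s 'http://localhost:9200/_cat/indices?v'

# Dump the saved objects Kibana has stored.
curl -s 'http://localhost:9200/.kibana/_search?pretty'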
Following this Kibana user guide, I attempted to store the kibana.yml config file within the container. However, this syntax breaks Kibana upon start. Here are the two files:
kibana.yml
kibana.index: ".kibana"
docker-compose.yml
version: '2'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.1
    ports:
      - "9200:9200"
    expose:
      - 9200
    restart: unless-stopped
    environment:
      - "transport.host=127.0.0.1"
      - "xpack.security.enabled=false"
    volumes:
      - "./elasticsearch/data/:/usr/share/elasticsearch/data"
  logstash:
    container_name: logstash
    build: ../../modules/logstash
    image: logstash:5.5.1
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:5.5.1
    restart: unless-stopped
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    expose:
      - 5601
    volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"
My questions: how can I load the export.json saved objects into Kibana automatically when the containers start, and why does mounting kibana.yml break Kibana?
My environment is Linux/CentOS 7.
Upvotes: 4
Views: 3912
Reputation: 10859
This is a common issue with Docker. One solution is to use short-lived containers that configure everything the way you want it, for example by running a shell script.
Take a look at Elastic's stack demo, which uses exactly this approach.
docker-compose.yml:
# Run a short-lived container to set up Logstash.
setup_logstash:
  image: centos:7
  volumes: ['./scripts/setup-logstash.sh:/usr/local/bin/setup-logstash.sh:ro']
  # The script may have CR/LF line endings if using Docker for Windows, so
  # make sure that they don't confuse Bash.
  command: ['/bin/bash', '-c', 'cat /usr/local/bin/setup-logstash.sh | tr -d "\r" | bash']
  environment: ['ELASTIC_PASSWORD=${ELASTIC_PASSWORD}']
  networks: ['stack']
  depends_on: ['elasticsearch']
setup-logstash.sh:
#!/bin/bash

set -euo pipefail

es_url=http://elastic:${ELASTIC_PASSWORD}@elasticsearch:9200

# Wait for Elasticsearch to start up before doing anything.
until curl -s $es_url -o /dev/null; do
    sleep 1
done

# Set the password for the logstash_system user.
# REF: https://www.elastic.co/guide/en/x-pack/6.0/setting-up-authentication.html#set-built-in-user-passwords
until curl -s -H 'Content-Type:application/json' \
     -XPUT $es_url/_xpack/security/user/logstash_system/_password \
     -d "{\"password\": \"${ELASTIC_PASSWORD}\"}"
do
    sleep 2
    echo Retrying...
done
Run whatever cURL command you need to configure Kibana the way you want it.
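Applied to this question, a sketch of the same pattern might look like the following. The service name, script name, and mount paths are assumptions. It also assumes a Kibana 5.x export file, which is a JSON array of objects with _id, _type, and _source fields; since Kibana 5.x does not, to my knowledge, expose a saved-objects import API, the objects are written straight into the .kibana index via the Elasticsearch bulk API. The URL carries no credentials because the question disables X-Pack security.

Compose service, analogous to setup_logstash above:

setup_kibana:
  image: centos:7
  volumes:
    - './scripts/setup-kibana.sh:/usr/local/bin/setup-kibana.sh:ro'
    - './test1_export.json:/usr/local/share/export.json:ro'
  command: ['/bin/bash', '-c', 'cat /usr/local/bin/setup-kibana.sh | tr -d "\r" | bash']
  depends_on: ['elasticsearch']

setup-kibana.sh:

#!/bin/bash
set -euo pipefail

es_url=http://elasticsearch:9200
export_file=/usr/local/share/export.json  # hypothetical mount point, see above

# jq is not in the stock centos:7 image; pull it in from EPEL.
yum install -y -q epel-release && yum install -y -q jq

# Wait for Elasticsearch to start up before doing anything.
until curl -s $es_url -o /dev/null; do
    sleep 1
done

# A Kibana 5.x export is a JSON array of {"_id", "_type", "_source"} entries.
# Convert it to bulk format: one action line, then one source line per object.
jq -c '.[] | {index: {_index: ".kibana", _type: ._type, _id: ._id}}, ._source' \
    "$export_file" > /tmp/bulk.ndjson

# Write the saved objects (index pattern, visualizations, dashboard)
# directly into the .kibana index.
curl -s -H 'Content-Type: application/x-ndjson' \
    -XPOST "$es_url/_bulk" --data-binary @/tmp/bulk.ndjson

The same pattern covers test2 by mounting a different export file, which matches the per-test layout described in the question.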
Upvotes: 4
Reputation: 119
I would recommend using this site as a guide for setting up the ELK stack. I have installed it myself and am very satisfied.
Upvotes: 1