Reputation: 565
I want to run Elasticsearch and Kibana with docker-compose. This is my docker-compose.yml, which I run with docker-compose --env-file dev.env up:
Docker Compose
version: '3.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.1
    container_name: elasticsearch
    environment:
      - cluster.name=elasticsearch-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
      - xpack.security.enrollment.enabled=true
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS}
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
      - xpack.security.enabled=true
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    networks:
      - esnet
volumes:
  esdata:
    driver: local
  postgres-data:
    driver: local
networks:
  esnet:
Stacktrace
Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to system indices that Kibana needs to function. Use a service account token instead
I managed to create a service-account token, for example for elastic/kibana, but how can I set it in docker-compose? Is there a specific env variable I should use? Or is there a way to make it work without using a service account?
Upvotes: 13
Views: 12517
Reputation: 221
You can set your service account token with the ELASTICSEARCH_SERVICEACCOUNTTOKEN environment variable in your docker-compose. I managed to set the service-account token for Kibana in my docker-compose like this:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.10.2
    container_name: elasticsearch
    restart: always
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      discovery.type: single-node
      ELASTIC_PASSWORD: elastic123
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.10.2
    container_name: logstash
    restart: always
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
      - '5044:5044'
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: elastic123
      XPACK_MONITORING_ELASTICSEARCH_USERNAME: elastic
      XPACK_MONITORING_ELASTICSEARCH_PASSWORD: elastic123
      XPACK_MONITORING_ELASTICSEARCH_HOSTS: "elasticsearch:9200"
      XPACK_MONITORING_ENABLED: "true"
    networks:
      - elk
  Kibana:
    image: kibana:8.10.2
    container_name: kibana
    restart: always
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_SERVICEACCOUNTTOKEN=MY_TOKEN
    depends_on:
      - Elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
My logstash.conf file:
input {
  tcp {
    port => "5044"
    type => syslog
    codec => json_lines
  }
}
output {
  # Log all messages to stdout so we can confirm that Logstash is receiving them
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash"
    user => "elastic"
    password => "elastic123"
  }
}
I also used this command to create the service-account token for Kibana:
curl -X POST -u elastic:elastic123 "localhost:9200/_security/service/elastic/kibana/credential/token/token1?pretty"
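If you want to wire the token into Compose automatically instead of pasting it in place of MY_TOKEN, one option (a sketch, not from the original answer; the dev.env file name is an assumption) is to extract the token.value field from the API response and append it to the env file that docker-compose reads:

```shell
# Create the token and capture the JSON response (assumes the cluster is up
# and reachable on localhost:9200 with the elastic:elastic123 credentials)
RESPONSE=$(curl -s -X POST -u elastic:elastic123 \
  "localhost:9200/_security/service/elastic/kibana/credential/token/token1")

# The response has the shape:
#   {"created":true,"token":{"name":"token1","value":"AAEAAWVsYXN0aWMv..."}}
# Pull out token.value with sed (avoids a jq dependency)
TOKEN=$(printf '%s' "$RESPONSE" | sed -E 's/.*"value" *: *"([^"]+)".*/\1/')

# Append it to the env file docker-compose reads
echo "ELASTICSEARCH_SERVICEACCOUNTTOKEN=${TOKEN}" >> dev.env
```

The compose file can then reference it as ELASTICSEARCH_SERVICEACCOUNTTOKEN=${ELASTICSEARCH_SERVICEACCOUNTTOKEN}.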
Upvotes: 6
Reputation: 390
I stumbled upon the same issue and tried using the kibana_admin and kibana_system built-in users, but that didn't work either. Maybe you can set the password for these users, but I was not able to.
The elastic user role is not allowed to have system-index write-access, which Kibana needs. This is based on a change by Elastic (Link to Pullrequest).
You should instead use Service Accounts as described in the docs for Service Accounts. According to the docs on creating a Service Account Token, you would have to start the Elasticsearch container and create a token before starting the Kibana container. This is also discussed on the Elasticsearch forums.
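That two-step startup can be scripted. A rough sketch, assuming the service names and credentials from the compose files above; the bounded wait loop and the dev.env file name are my own assumptions:

```shell
#!/bin/sh
# 1. Start only Elasticsearch
docker-compose up -d elasticsearch

# 2. Wait (up to ~60s) until the cluster answers on port 9200
for i in $(seq 1 30); do
  curl -s -u elastic:elastic123 "localhost:9200" >/dev/null && break
  sleep 2
done

# 3. Create a service-account token for Kibana and store it in the env file
TOKEN=$(curl -s -X POST -u elastic:elastic123 \
  "localhost:9200/_security/service/elastic/kibana/credential/token/token1" \
  | sed -E 's/.*"value" *: *"([^"]+)".*/\1/')
echo "ELASTICSEARCH_SERVICEACCOUNTTOKEN=${TOKEN}" >> dev.env

# 4. Start Kibana now that the token is available
docker-compose --env-file dev.env up -d kibana
```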
Downgrading to a previous ELK version is also a possibility, and is what I did, since I only need the cluster for local development.
Upvotes: 7