Reputation: 117
I have a Docker Compose setup with Nginx, Elasticsearch and Kibana like the following:
web:
  build:
    context: .
    dockerfile: ./system/docker/development/web.Dockerfile
  depends_on:
    - app
  volumes:
    - './system/ssl:/etc/ssl/certs'
  networks:
    - mynet
  ports:
    - 80:80
    - 443:443
elasticsearch_1:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_1"
  environment:
    - node.name=elasticsearch_1
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_2,elasticsearch_3
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
  networks:
    - mynet
elasticsearch_2:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_2"
  environment:
    - node.name=elasticsearch_2
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_1,elasticsearch_3
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_2:/usr/share/elasticsearch/data
  ports:
    - 9201:9200 # host 9201 -> container HTTP port 9200
  networks:
    - mynet
elasticsearch_3:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_elasticsearch_3"
  environment:
    - node.name=elasticsearch_3
    - cluster.name=es-docker-cluster
    - discovery.seed_hosts=elasticsearch_1,elasticsearch_2
    - cluster.initial_master_nodes=elasticsearch_1,elasticsearch_2,elasticsearch_3
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - es_volume_3:/usr/share/elasticsearch/data
  ports:
    - 9202:9200 # host 9202 -> container HTTP port 9200
  networks:
    - mynet
kibana:
  image: docker.elastic.co/kibana/kibana:7.7.0
  container_name: "${COMPOSE_PROJECT_NAME:-service}_kibana"
  ports:
    - 5601:5601
  environment:
    ELASTICSEARCH_URL: http://elasticsearch_1:9200
    ELASTICSEARCH_HOSTS: http://elasticsearch_1:9200
  networks:
    - mynet
volumes:
  es_volume_1: null
  es_volume_2: null
  es_volume_3: null
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/24
          gateway: 172.18.0.1
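For reference, once the stack is up the Elasticsearch cluster itself can be sanity-checked from the host through the published port, e.g.:
# Expect "number_of_nodes": 3 and a green/yellow status for the three-node cluster above
curl -s http://localhost:9200/_cluster/health?pretty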
When I (build and) run this using docker-compose up, I'm able to access Kibana at http://localhost:5601/, but when I try to set up a reverse proxy for it using Nginx, I get a 502 Bad Gateway error. Here's my Nginx config file:
server {
    listen 80;
    listen 443 ssl http2;

    ssl_certificate /ssl/localhost.crt;
    ssl_certificate_key /ssl/localhost.key;

    ...

    location /app/kibana {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ /\. {
        deny all;
    }

    ...
}
What I'm trying to do here is to be able to access Kibana at http://localhost/app/kibana. The articles I've gone through (like this) seem to be focused more on securing Kibana access through Nginx (using Basic Auth) than on exposing it on a particular path on port 80.
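For reference, the symptom as seen from the host (a quick check using the ports published above):
# Kibana answers directly on its published port ...
curl -sI http://localhost:5601/app/kibana | head -n 1
# ... but the same path through the Nginx proxy returns 502 Bad Gateway
curl -sI http://localhost/app/kibana | head -n 1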
Update
So, I changed localhost to kibana (as suggested by @mikezter) and now it seems to at least find the Kibana service (so there's no more 502 error).
However, I then encountered a blank page with a few errors in the browser debug console. Upon searching, I came across this location directive:
location ~ (/app|/translations|/node_modules|/built_assets/|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch|/spaces/enter) {
    proxy_pass http://kibana:5601;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header Authorization "";
    proxy_hide_header Authorization;
}
Now the page loads and there is some UI, but there's still some issue with the scripting, so the page is not available for user interaction.
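One way to narrow down which requests fail is to compare Kibana's status API directly and through the proxy; a quick sketch (the /api prefix is covered by the location regex above):
# Directly against Kibana's published port
curl -s http://localhost:5601/api/status | head -c 300
# Through the Nginx proxy; the /api alternative in the regex should match
curl -s http://localhost/api/status | head -c 300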
Upvotes: 0
Views: 4252
Reputation: 282
First, create a site file for Nginx:
$ sudo nano /etc/nginx/sites-available/kibana.example.com
$ sudo ln -s /etc/nginx/sites-available/kibana.example.com /etc/nginx/sites-enabled/
Put the following into it:
server {
    listen 80;
    client_max_body_size 4G;
    server_name kibana.example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_buffering off;
        proxy_pass http://kibana_server;
    }
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream kibana_server {
    server 127.0.0.1:5601;
}
In your docker-compose.yml, publish Kibana's port only on the host's loopback interface:
services:
  ...
  kibana:
    ports:
      - "127.0.0.1:5601:5601"
  ...
Execute docker compose up -d.
Run a sanity check on your nginx configuration: sudo nginx -t.
Restart nginx: sudo systemctl restart nginx.
Access your Kibana server at http://kibana.example.com.
PS: It's implied that kibana.example.com is just a placeholder domain name.
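Before DNS for the placeholder domain exists, you can hit the vhost by passing the Host header explicitly; a quick sketch, run on the Docker host itself:
# Expect a 200 or a redirect into the Kibana app
curl -i -H "Host: kibana.example.com" http://127.0.0.1/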
Upvotes: 0
Reputation: 357
I understand this might not be a total fix for the second part of your problem, but with the following version
ELK_VERSION=7.12.0
Kibana seems to work well on the default route '/'.
The below worked for me:
server {
    listen 80 default_server;
    server_name $hostname;

    location / {
        proxy_pass http://kibana:5601;
        # kindly add your header config that works for you
    }
}
I think it has to do with the way you're configuring your nginx location regex match.
The configuration I eventually went with was to have nginx listen on multiple ports, so I isolated Kibana on its own port, where it listens on the default route '/'.
E.g. in my nginx.conf:
server {
    listen 80 default_server;
    server_name $hostname;

    location / {
        proxy_pass http://identity-api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 81;
    server_name $hostname;

    location / {
        proxy_pass http://kibana:5601;
        # kindly add your header config that works for you
    }
}
Lastly, I updated my nginx ports in docker-compose:
nginx-reverseproxy:
  ports:
    - "80:80"
    - "81:81"
Upvotes: 0
Reputation: 2463
You are connecting all the containers in this config via a container network. Look at the environment variables set in the Kibana config:
ELASTICSEARCH_URL: http://elasticsearch_1:9200
Here you can see that the hostname of the other container running Elasticsearch is elasticsearch_1. In a similar manner, the hostname of the container running Kibana would be kibana. These hostnames are only available inside the container network.
So in your Nginx config, you'll have to proxy_pass to http://kibana:5601 instead of localhost.
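You can verify this from inside the Nginx container; a quick sketch, assuming the service names from the question's docker-compose.yml and that curl is available in the web image:
# "localhost" inside the web container is the container itself, so nothing answers on 5601
docker-compose exec web curl -sI http://localhost:5601/
# the Compose service name "kibana" resolves on the shared network and reaches Kibana
docker-compose exec web curl -sI http://kibana:5601/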
Upvotes: 2