Reputation: 107
On my AWS EC2 Linux server I'm running an ELK stack, where Logstash transforms data from a PostgreSQL database and imports it into Elasticsearch. This setup is currently in use for my development environment. We have now created a staging environment, so we probably also need a separate ELK stack for staging, since we don't want to mix the data from the two separate databases (dev and stage).
I have quite limited experience with ELK; I have checked some options but did not find a solution to this problem.
What I have tried is to create another docker-compose file with different container names and ports. When I run docker-compose.elastic.dev.yml, it creates the first ELK stack normally. Then I run docker-compose.elastic.stage.yml, but it starts to recreate the existing ELK containers instead of creating new ones. I have tried to play with various docker-compose settings, but no luck so far. Any suggestions?
Just for reference, Kibana is not included in dev because we don't need it there.
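For completeness, this is roughly how I bring the stacks up (assuming the default project naming, i.e. no -p flag):

docker-compose -f docker-compose.elastic.dev.yml up -d
docker-compose -f docker-compose.elastic.stage.yml up -d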
docker-compose.elastic.stage.yml
version: '3.7'
services:
  elasticsearch-stage:
    container_name: elasticsearch-stage
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - 9400:9200
    environment:
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-methods=OPTIONS,HEAD,GET,POST,PUT,DELETE
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - transport.host=127.0.0.1
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data_stage:/usr/share/elasticsearch/data
    networks:
      - api_network
  kibana-stage:
    container_name: kibana-stage
    image: docker.elastic.co/kibana/kibana:7.10.2
    ports:
      - 5601:5601
    networks:
      - api_network
    depends_on:
      - elasticsearch-stage
  logstash-stage:
    container_name: logstash-stage
    ports:
      - 5045:5045
    build:
      dockerfile: Dockerfile.logstash
      context: .
    environment:
      LOGSTASH_JDBC_URL: "jdbc:postgresql://serverip:15433/name"
      LOGSTASH_JDBC_USERNAME: "name"
      LOGSTASH_JDBC_PASSWORD: "password"
      LOGSTASH_ELASTICSEARCH_HOST: "http://elasticsearch-stage:9200"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./offers_template.json:/usr/share/logstash/templates/offers_template.json
      - ./offers_query.sql:/usr/share/logstash/queries/offers_query.sql
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "5"
    networks:
      - api_network
    depends_on:
      - elasticsearch-stage
      - kibana-stage
volumes:
  elasticsearch_data_stage:
networks:
  api_network:
    name: name_api_network_stage
docker-compose.elastic.dev.yml
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    ports:
      - 9200:9200
    environment:
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-methods=OPTIONS,HEAD,GET,POST,PUT,DELETE
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - transport.host=127.0.0.1
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - api_network
  logstash:
    build:
      dockerfile: Dockerfile.logstash
      context: .
    environment:
      LOGSTASH_JDBC_URL: "jdbc:postgresql://serverip:15432/username"
      LOGSTASH_JDBC_USERNAME: "username"
      LOGSTASH_JDBC_PASSWORD: "password"
      LOGSTASH_ELASTICSEARCH_HOST: "http://elasticsearch:9200"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./offers_template.json:/usr/share/logstash/templates/offers_template.json
      - ./offers_query.sql:/usr/share/logstash/queries/offers_query.sql
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "5"
    networks:
      - api_network
    depends_on:
      - elasticsearch
volumes:
  elasticsearch_data:
networks:
  api_network:
    name: name_api_network
I have also found this article, which seems to describe a similar (or the same) problem; unfortunately, the topic was closed without a confirmed solution.
logstash.conf
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "${LOGSTASH_JDBC_URL}"
    jdbc_user => "${LOGSTASH_JDBC_USERNAME}"
    jdbc_password => "${LOGSTASH_JDBC_PASSWORD}"
    lowercase_column_names => false
    schedule => "* * * * *"
    statement_filepath => "/usr/share/logstash/queries/offers_query.sql"
  }
}
filter {
  json {
    source => "name"
    target => "name"
  }
  json {
    source => "description"
    target => "description"
  }
  ...
  ...
}
output {
  elasticsearch {
    hosts => ["${LOGSTASH_ELASTICSEARCH_HOST}"]
    index => "offers"
    document_id => "%{id}"
    manage_template => true
    template_name => "offers"
    template => "/usr/share/logstash/templates/offers_template.json"
    template_overwrite => true
  }
  stdout { codec => json_lines }
}
UPDATE:
I found out here that when not running the default Logstash configuration, I need to set XPACK_MONITORING_ENABLED: "false" in the logstash environment.
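For reference, a minimal sketch of where that variable now sits in the stage file (the dev file gets the same addition):

logstash-stage:
  environment:
    XPACK_MONITORING_ENABLED: "false"
    # ...existing LOGSTASH_* variables stay as before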
With that set, the error about Logstash not being able to connect to Elasticsearch was gone, but Logstash still did not do its job of processing data from the DB as it normally should. What's happening now is that in the Logstash logs, every few minutes there is just the plain query text loaded from offers_query.sql. When I enter elasticsearch_server_ip:9400, I get this output (so it should be running):
{
  "name" : "30ac276f0846",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "14mxQTP7S32o-rIrjYSsXw",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Upvotes: 0
Views: 622
Reputation: 48
As far as I can understand, you still have the same service names in both files, and that is what confuses docker-compose up -d. Your problem is the naming of services inside the docker-compose files:
services:
  elasticsearch
  logstash
It's the same in the dev and staging compose files, and since you are not running Swarm, you will need the following: separate the docker-compose files into different folders so that docker-compose can create different container names (the project name, which prefixes the container names, defaults to the folder name), as shown below.
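For example, a sketch of both approaches (the folder and project names here are only illustrative):

# option 1: one folder per environment; the folder name becomes the project name
mkdir dev stage
mv docker-compose.elastic.dev.yml dev/docker-compose.yml
mv docker-compose.elastic.stage.yml stage/docker-compose.yml
(cd dev && docker-compose up -d)
(cd stage && docker-compose up -d)

# option 2: keep the files in place and set the project name explicitly with -p
docker-compose -p elk-dev -f docker-compose.elastic.dev.yml up -d
docker-compose -p elk-stage -f docker-compose.elastic.stage.yml up -d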
And yes, you can't forward the same host port twice:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
  ports:
    - 9200:9200
One Elasticsearch should have 9400:9200 or something similar.
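For example, a minimal sketch of the staging side (mapping host port 9400 to the container's 9200):

elasticsearch-stage:
  ports:
    - 9400:9200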
Upvotes: 1