coredump

Reputation: 133

Kibana on Kubernetes - how to point to ES container running on a different pod

I am learning Kubernetes by setting up two pods, one running an Elasticsearch container and the other a Kibana container.

My configuration file sets up both pods and creates two services to access these applications from the host machine's web browser.

The issue is that I don't know how to make the Kibana container communicate with the ES application/pod.

Earlier, while learning Docker, I crafted a docker-compose configuration, and now I am basically trying to do the same using Kubernetes (docker-compose config pasted below).

I came across a blog that suggested using a Deployment instead of a Pod, but again I am not sure how one would make Kibana talk to ES.

Kubernetes configuration YAML:

apiVersion: v1
kind: Pod
metadata:
  name: pod-elasticsearch
  labels:
      app: myapp
spec:
  hostname: "es01-docker-local"

  containers:
    - name: myelasticsearch-container
      image: myelasticsearch
      imagePullPolicy: Never
      volumeMounts:
        - name: my-volume
          mountPath: /home/newuser

  volumes:
    - name: my-volume
      emptyDir: {}

---

apiVersion: v1
kind: Service

metadata:
    name: myelasticsearch-service
spec:
    type: NodePort
    ports:
        - targetPort: 9200
          port: 9200
          nodePort: 30015
    selector:
        app: myapp

---

apiVersion: v1
kind: Pod
metadata:
  name: pod-kibana
  labels:
      app: myapp
spec:
  containers:
    - name: mykibana-container
      image: mykibana
      imagePullPolicy: Never
      volumeMounts:
        - name: my-volume
          mountPath: /home/newuser
  volumes:
    - name: my-volume
      emptyDir: {}

---

apiVersion: v1
kind: Service
metadata:
    name: mykibana-service
spec:
    type: NodePort
    ports:
        - targetPort: 5601
          port: 5601
          nodePort: 30016
    selector:
        app: myapp

For reference, below is the docker-compose file that I am trying to replicate on Kubernetes:

version: "2.2"

services:

  elasticsearch:
    image: myelasticsearch
    container_name: myelasticsearch-container
    restart: always
    hostname: 'es01.docker.local'
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - myVolume:/home/newuser/
    environment:
      - discovery.type=single-node


  kibana:
    depends_on:
      - elasticsearch
    image: mykibana
    container_name: mykibana-container
    restart: always
    ports:
      - '5601:5601'
    volumes:
      - myVolume:/home/newuser/
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
volumes:
  myVolume:

networks:
  myNetwork:

ES Pod description:

% kubectl describe pod/pod-elasticsearch
Name:         pod-elasticsearch
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.3
Start Time:   Sun, 10 Jan 2021 23:06:18 -0800
Labels:       app=myapp
Annotations:  <none>
Status:       Running
IP:           10.x.0.yy
IPs:
  IP:  10.x.0.yy

Upvotes: 0

Views: 1686

Answers (2)

skap

Reputation: 513

If you want to deploy Elasticsearch and Kibana on Kubernetes the usual way, you have to take care of some core Elasticsearch cluster settings such as:

  • cluster.initial_master_nodes (added in 7.0)
  • network.host
  • network.publish_host

You would also have to set up network.host carefully so that it remains the same even after accidental pod restarts.

While deploying Kibana, you need to provide the Elasticsearch service URL, and you have to manually configure the SSL certificates if Elasticsearch has SSL enabled.

So, to install the Elastic Stack on Kubernetes, you should probably prefer Elastic Cloud on Kubernetes (ECK). The documentation provided by Elastic is easy to understand.

Elastic Cloud on Kubernetes (ECK) uses Kubernetes Operators to make installation easier and it automatically takes care of core cluster configuration.

The ECK installation creates a default user called "elastic", and you can retrieve its password from a secret. It also creates self-signed certificates, which can be found in secrets as well.
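For example, if the Elasticsearch resource is named quickstart (the name used in the ECK quickstart; substitute your own cluster name), the password can be read like this:

```shell
# ECK stores the "elastic" user's password in a secret named
# <cluster-name>-es-elastic-user
kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}'
```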

For deploying Kibana, you can just provide an "elasticsearchRef" in your YAML file and it will automatically configure the Elasticsearch endpoints. You can use the default "elastic" user to log in to Kibana.
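As a sketch of what that looks like (the resource name quickstart and the version number are placeholders taken from the ECK quickstart):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.1
  nodeSets:
  - name: default
    count: 1
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart  # wires Kibana to the Elasticsearch resource above
```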

Upvotes: 1

Aschay

Reputation: 374

In Kubernetes, Pods/Deployments/DaemonSets... in the same cluster can communicate with each other without any problem, because the cluster has a flat network architecture. One way for these resources to call each other directly is through the name of the Kubernetes Service in front of each resource. For example, any resource in the cluster can reach your Kibana app directly via the service name you gave it: mykibana-service.name-of-namespace.

So, for the Kibana pod to communicate with Elasticsearch, it can use http://name-of-service-of-elasticsearch.name-of-namespace:9200. The namespace is default if you don't specify one when you create your service, so: http://name-of-service-of-elasticsearch.default:9200, or simply http://name-of-service-of-elasticsearch:9200.
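Applied to the manifests in the question, the Kibana container could be pointed at the existing myelasticsearch-service through environment variables (a sketch; ELASTICSEARCH_HOSTS is the variable the official Kibana 7.x images read):

```yaml
# Added under the mykibana-container entry in the question's Pod manifest
env:
  - name: ELASTICSEARCH_HOSTS
    value: http://myelasticsearch-service.default:9200
```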

The concern you raised about which type of resource to create (Pod, Deployment, DaemonSet, or StatefulSet) does not matter for these resources to communicate with each other.

If you are having trouble converting the docker-compose file to manifest files, you can start with Kompose: run kompose convert in the directory where your docker-compose file is located.
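For example (assuming the Kompose CLI is installed and the compose file is named docker-compose.yml):

```shell
# Generate Kubernetes manifests from the compose file in the current directory
kompose convert -f docker-compose.yml
```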

Here is a sample:

---
apiVersion: apps/v1
kind: Deployment 
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: myelasticsearch:yourtag #fix this 
        name: elasticsearch
        ports:
        - containerPort: 9200
        - containerPort: 9300
        volumeMounts:
        - mountPath: /home/newuser/
          name: my-volume
      volumes:
      - name: my-volume
        emptyDir: {}  # I wouldn't use emptyDir for Elasticsearch data
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  ports:
  - port: 9200
    name: "9200"
    targetPort: 9200
  - port: 9300
    name: "9300"
    targetPort: 9300
  selector:
    app: elasticsearch
  type: ClusterIP # you don't need to expose this service publicly
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/ # elasticsearch is the same as the Service resource name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200  
        image: mykibana:yourtagname #fix this 
        name: kibana
        
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
  type: NodePort

You can choose what is adequate for your app: for example, you can use a StatefulSet or a Deployment for Elasticsearch, a Deployment for Kibana, and you can change the type of volume. Also, the myNetwork you created in docker-compose can be translated into a network policy, which lets you isolate your resources (for example, an isolated myNetwork namespace), because these resources are not isolated by default when they are created in the same cluster.
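As a sketch, a NetworkPolicy that only lets the Kibana pods reach Elasticsearch on port 9200 (reusing the app labels from the sample manifests above) could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kibana-to-elasticsearch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch   # policy applies to the Elasticsearch pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: kibana      # only Kibana pods may connect
    ports:
    - protocol: TCP
      port: 9200
```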

Hope this helps.

Upvotes: 1
